Test Report: KVM_Linux_crio 22230

c636a8658fdd5cfdd18416b9a30087c97060a836:2025-12-19:42856

Test failures (43/431)

Order  Failed test  Duration (s)
46 TestAddons/parallel/Ingress 161.17
99 TestFunctional/parallel/DashboardCmd 301.9
106 TestFunctional/parallel/ServiceCmdConnect 602.49
123 TestFunctional/parallel/ServiceCmd/DeployApp 600.51
159 TestFunctional/parallel/ServiceCmd/HTTPS 0.23
160 TestFunctional/parallel/ServiceCmd/Format 0.24
161 TestFunctional/parallel/ServiceCmd/URL 0.23
192 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd 301.94
199 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect 602.69
216 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp 600.64
252 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS 0.23
253 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format 0.23
254 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL 0.23
345 TestPreload 146.77
383 TestPause/serial/SecondStartNoReconfiguration 67.19
391 TestISOImage/Binaries/crictl 0
392 TestISOImage/Binaries/curl 0
393 TestISOImage/Binaries/docker 0
394 TestISOImage/Binaries/git 0
395 TestISOImage/Binaries/iptables 0
396 TestISOImage/Binaries/podman 0
397 TestISOImage/Binaries/rsync 0
398 TestISOImage/Binaries/socat 0
399 TestISOImage/Binaries/wget 0
400 TestISOImage/Binaries/VBoxControl 0
401 TestISOImage/Binaries/VBoxService 0
474 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 542.6
477 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 542.41
478 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 542.28
479 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 542.53
480 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 542.59
481 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 542.15
482 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 541.99
483 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 541.93
496 TestISOImage/PersistentMounts//data 0
497 TestISOImage/PersistentMounts//var/lib/docker 0
498 TestISOImage/PersistentMounts//var/lib/cni 0
499 TestISOImage/PersistentMounts//var/lib/kubelet 0
500 TestISOImage/PersistentMounts//var/lib/minikube 0
501 TestISOImage/PersistentMounts//var/lib/toolbox 0
502 TestISOImage/PersistentMounts//var/lib/boot2docker 0
503 TestISOImage/VersionJSON 0
504 TestISOImage/eBPFSupport 0
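Any of the failures above can be re-run in isolation against the same kvm2/crio combination this job uses. A minimal sketch, assuming minikube's usual integration-test entry point (a make integration target that forwards a TEST_ARGS variable to go test); the flag spellings and start args here are illustrative and may differ per checkout:

	# re-run only the failing ingress addon test with this job's driver and runtime
	env TEST_ARGS="-minikube-start-args='--driver=kvm2 --container-runtime=crio' -test.run TestAddons/parallel/Ingress" make integration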
TestAddons/parallel/Ingress (161.17s)
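The failing step is the in-VM probe at addons_test.go:266: a curl against 127.0.0.1 with the Host header of the test ingress, run over minikube ssh, which never gets a response (ssh reports the remote command's exit code, and 28 is curl's operation-timed-out error). A minimal manual reproduction, assuming the addons-959667 profile from this run is still up (the 30-second cap is illustrative):

	# confirm the ingress controller and the test ingress object are present
	kubectl --context addons-959667 -n ingress-nginx get pods -l app.kubernetes.io/component=controller
	kubectl --context addons-959667 get ingress
	# repeat the probe the test performs, with an explicit timeout and verbose output
	out/minikube-linux-amd64 -p addons-959667 ssh "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"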
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-959667 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-959667 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-959667 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [adfec46a-88fb-45a3-a47c-4b9e6b5a439b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [adfec46a-88fb-45a3-a47c-4b9e6b5a439b] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 16.015723425s
I1219 02:28:31.864193    8937 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p addons-959667 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:266: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-959667 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m13.34426212s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:282: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:290: (dbg) Run:  kubectl --context addons-959667 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p addons-959667 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.39.204
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-959667 -n addons-959667
helpers_test.go:253: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-959667 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-959667 logs -n 25: (1.016602969s)
helpers_test.go:261: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-064321                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-064321 │ jenkins │ v1.37.0 │ 19 Dec 25 02:25 UTC │ 19 Dec 25 02:25 UTC │
	│ start   │ --download-only -p binary-mirror-761262 --alsologtostderr --binary-mirror http://127.0.0.1:37093 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-761262 │ jenkins │ v1.37.0 │ 19 Dec 25 02:25 UTC │                     │
	│ delete  │ -p binary-mirror-761262                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-761262 │ jenkins │ v1.37.0 │ 19 Dec 25 02:25 UTC │ 19 Dec 25 02:25 UTC │
	│ addons  │ disable dashboard -p addons-959667                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-959667        │ jenkins │ v1.37.0 │ 19 Dec 25 02:25 UTC │                     │
	│ addons  │ enable dashboard -p addons-959667                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-959667        │ jenkins │ v1.37.0 │ 19 Dec 25 02:25 UTC │                     │
	│ start   │ -p addons-959667 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-959667        │ jenkins │ v1.37.0 │ 19 Dec 25 02:25 UTC │ 19 Dec 25 02:27 UTC │
	│ addons  │ addons-959667 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-959667        │ jenkins │ v1.37.0 │ 19 Dec 25 02:27 UTC │ 19 Dec 25 02:27 UTC │
	│ addons  │ addons-959667 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-959667        │ jenkins │ v1.37.0 │ 19 Dec 25 02:28 UTC │ 19 Dec 25 02:28 UTC │
	│ addons  │ enable headlamp -p addons-959667 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-959667        │ jenkins │ v1.37.0 │ 19 Dec 25 02:28 UTC │ 19 Dec 25 02:28 UTC │
	│ addons  │ addons-959667 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-959667        │ jenkins │ v1.37.0 │ 19 Dec 25 02:28 UTC │ 19 Dec 25 02:28 UTC │
	│ addons  │ addons-959667 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-959667        │ jenkins │ v1.37.0 │ 19 Dec 25 02:28 UTC │ 19 Dec 25 02:28 UTC │
	│ addons  │ addons-959667 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-959667        │ jenkins │ v1.37.0 │ 19 Dec 25 02:28 UTC │ 19 Dec 25 02:28 UTC │
	│ addons  │ addons-959667 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-959667        │ jenkins │ v1.37.0 │ 19 Dec 25 02:28 UTC │ 19 Dec 25 02:28 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-959667                                                                                                                                                                                                                                                                                                                                                                                         │ addons-959667        │ jenkins │ v1.37.0 │ 19 Dec 25 02:28 UTC │ 19 Dec 25 02:28 UTC │
	│ addons  │ addons-959667 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-959667        │ jenkins │ v1.37.0 │ 19 Dec 25 02:28 UTC │ 19 Dec 25 02:28 UTC │
	│ addons  │ addons-959667 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-959667        │ jenkins │ v1.37.0 │ 19 Dec 25 02:28 UTC │ 19 Dec 25 02:28 UTC │
	│ ip      │ addons-959667 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-959667        │ jenkins │ v1.37.0 │ 19 Dec 25 02:28 UTC │ 19 Dec 25 02:28 UTC │
	│ addons  │ addons-959667 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-959667        │ jenkins │ v1.37.0 │ 19 Dec 25 02:28 UTC │ 19 Dec 25 02:28 UTC │
	│ ssh     │ addons-959667 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                               │ addons-959667        │ jenkins │ v1.37.0 │ 19 Dec 25 02:28 UTC │                     │
	│ addons  │ addons-959667 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-959667        │ jenkins │ v1.37.0 │ 19 Dec 25 02:28 UTC │ 19 Dec 25 02:28 UTC │
	│ ssh     │ addons-959667 ssh cat /opt/local-path-provisioner/pvc-f78e963f-a5db-43da-8670-e54bf8a0fc73_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                      │ addons-959667        │ jenkins │ v1.37.0 │ 19 Dec 25 02:28 UTC │ 19 Dec 25 02:28 UTC │
	│ addons  │ addons-959667 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-959667        │ jenkins │ v1.37.0 │ 19 Dec 25 02:28 UTC │ 19 Dec 25 02:29 UTC │
	│ addons  │ addons-959667 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-959667        │ jenkins │ v1.37.0 │ 19 Dec 25 02:29 UTC │ 19 Dec 25 02:29 UTC │
	│ addons  │ addons-959667 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-959667        │ jenkins │ v1.37.0 │ 19 Dec 25 02:29 UTC │ 19 Dec 25 02:29 UTC │
	│ ip      │ addons-959667 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-959667        │ jenkins │ v1.37.0 │ 19 Dec 25 02:30 UTC │ 19 Dec 25 02:30 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 02:25:41
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 02:25:41.119317    9864 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:25:41.119550    9864 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:25:41.119558    9864 out.go:374] Setting ErrFile to fd 2...
	I1219 02:25:41.119563    9864 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:25:41.119781    9864 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5010/.minikube/bin
	I1219 02:25:41.120271    9864 out.go:368] Setting JSON to false
	I1219 02:25:41.120998    9864 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":485,"bootTime":1766110656,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 02:25:41.121044    9864 start.go:143] virtualization: kvm guest
	I1219 02:25:41.122404    9864 out.go:179] * [addons-959667] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 02:25:41.123402    9864 notify.go:221] Checking for updates...
	I1219 02:25:41.123412    9864 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 02:25:41.124542    9864 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 02:25:41.125453    9864 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-5010/kubeconfig
	I1219 02:25:41.126412    9864 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5010/.minikube
	I1219 02:25:41.127258    9864 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 02:25:41.128189    9864 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 02:25:41.129176    9864 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 02:25:41.156449    9864 out.go:179] * Using the kvm2 driver based on user configuration
	I1219 02:25:41.157318    9864 start.go:309] selected driver: kvm2
	I1219 02:25:41.157329    9864 start.go:928] validating driver "kvm2" against <nil>
	I1219 02:25:41.157338    9864 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 02:25:41.158027    9864 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1219 02:25:41.158278    9864 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 02:25:41.158306    9864 cni.go:84] Creating CNI manager for ""
	I1219 02:25:41.158365    9864 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1219 02:25:41.158376    9864 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1219 02:25:41.158419    9864 start.go:353] cluster config:
	{Name:addons-959667 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-959667 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: A
utoPauseInterval:1m0s}
	I1219 02:25:41.158537    9864 iso.go:125] acquiring lock: {Name:mk42290af04a74a4dddf27fa33aac85c9bdccfe0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 02:25:41.159608    9864 out.go:179] * Starting "addons-959667" primary control-plane node in "addons-959667" cluster
	I1219 02:25:41.160425    9864 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1219 02:25:41.160447    9864 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-5010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1219 02:25:41.160454    9864 cache.go:65] Caching tarball of preloaded images
	I1219 02:25:41.160531    9864 preload.go:238] Found /home/jenkins/minikube-integration/22230-5010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1219 02:25:41.160541    9864 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1219 02:25:41.160848    9864 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/config.json ...
	I1219 02:25:41.160873    9864 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/config.json: {Name:mk098fabf9ef37f8d9be0ff5ae773a74b79016fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 02:25:41.161034    9864 start.go:360] acquireMachinesLock for addons-959667: {Name:mk229398d29442d4d52885bbac963e5004bfbfba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1219 02:25:41.161093    9864 start.go:364] duration metric: took 43.105µs to acquireMachinesLock for "addons-959667"
	I1219 02:25:41.161114    9864 start.go:93] Provisioning new machine with config: &{Name:addons-959667 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.3 ClusterName:addons-959667 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1219 02:25:41.161173    9864 start.go:125] createHost starting for "" (driver="kvm2")
	I1219 02:25:41.162302    9864 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1219 02:25:41.162451    9864 start.go:159] libmachine.API.Create for "addons-959667" (driver="kvm2")
	I1219 02:25:41.162479    9864 client.go:173] LocalClient.Create starting
	I1219 02:25:41.162559    9864 main.go:144] libmachine: Creating CA: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca.pem
	I1219 02:25:41.330183    9864 main.go:144] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/cert.pem
	I1219 02:25:41.479729    9864 main.go:144] libmachine: creating domain...
	I1219 02:25:41.479748    9864 main.go:144] libmachine: creating network...
	I1219 02:25:41.481034    9864 main.go:144] libmachine: found existing default network
	I1219 02:25:41.481233    9864 main.go:144] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1219 02:25:41.481765    9864 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e3a490}
	I1219 02:25:41.481853    9864 main.go:144] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-959667</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1219 02:25:41.487310    9864 main.go:144] libmachine: creating private network mk-addons-959667 192.168.39.0/24...
	I1219 02:25:41.547778    9864 main.go:144] libmachine: private network mk-addons-959667 192.168.39.0/24 created
	I1219 02:25:41.548051    9864 main.go:144] libmachine: <network>
	  <name>mk-addons-959667</name>
	  <uuid>16ddc597-0b87-44bd-9757-be0dd243209d</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:14:77:86'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1219 02:25:41.548084    9864 main.go:144] libmachine: setting up store path in /home/jenkins/minikube-integration/22230-5010/.minikube/machines/addons-959667 ...
	I1219 02:25:41.548122    9864 main.go:144] libmachine: building disk image from file:///home/jenkins/minikube-integration/22230-5010/.minikube/cache/iso/amd64/minikube-v1.37.0-1765965980-22186-amd64.iso
	I1219 02:25:41.548135    9864 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22230-5010/.minikube
	I1219 02:25:41.548200    9864 main.go:144] libmachine: Downloading /home/jenkins/minikube-integration/22230-5010/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22230-5010/.minikube/cache/iso/amd64/minikube-v1.37.0-1765965980-22186-amd64.iso...
	I1219 02:25:41.836650    9864 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22230-5010/.minikube/machines/addons-959667/id_rsa...
	I1219 02:25:41.886014    9864 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22230-5010/.minikube/machines/addons-959667/addons-959667.rawdisk...
	I1219 02:25:41.886051    9864 main.go:144] libmachine: Writing magic tar header
	I1219 02:25:41.886072    9864 main.go:144] libmachine: Writing SSH key tar header
	I1219 02:25:41.886146    9864 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22230-5010/.minikube/machines/addons-959667 ...
	I1219 02:25:41.886214    9864 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22230-5010/.minikube/machines/addons-959667
	I1219 02:25:41.886235    9864 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22230-5010/.minikube/machines/addons-959667 (perms=drwx------)
	I1219 02:25:41.886244    9864 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22230-5010/.minikube/machines
	I1219 02:25:41.886256    9864 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22230-5010/.minikube/machines (perms=drwxr-xr-x)
	I1219 02:25:41.886266    9864 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22230-5010/.minikube
	I1219 02:25:41.886277    9864 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22230-5010/.minikube (perms=drwxr-xr-x)
	I1219 02:25:41.886285    9864 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22230-5010
	I1219 02:25:41.886295    9864 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22230-5010 (perms=drwxrwxr-x)
	I1219 02:25:41.886303    9864 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1219 02:25:41.886322    9864 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1219 02:25:41.886338    9864 main.go:144] libmachine: checking permissions on dir: /home/jenkins
	I1219 02:25:41.886348    9864 main.go:144] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1219 02:25:41.886357    9864 main.go:144] libmachine: checking permissions on dir: /home
	I1219 02:25:41.886366    9864 main.go:144] libmachine: skipping /home - not owner
	I1219 02:25:41.886369    9864 main.go:144] libmachine: defining domain...
	I1219 02:25:41.887420    9864 main.go:144] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-959667</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22230-5010/.minikube/machines/addons-959667/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22230-5010/.minikube/machines/addons-959667/addons-959667.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-959667'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1219 02:25:41.894002    9864 main.go:144] libmachine: domain addons-959667 has defined MAC address 52:54:00:8d:75:e4 in network default
	I1219 02:25:41.894618    9864 main.go:144] libmachine: domain addons-959667 has defined MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:25:41.894637    9864 main.go:144] libmachine: starting domain...
	I1219 02:25:41.894644    9864 main.go:144] libmachine: ensuring networks are active...
	I1219 02:25:41.895306    9864 main.go:144] libmachine: Ensuring network default is active
	I1219 02:25:41.895691    9864 main.go:144] libmachine: Ensuring network mk-addons-959667 is active
	I1219 02:25:41.896244    9864 main.go:144] libmachine: getting domain XML...
	I1219 02:25:41.897283    9864 main.go:144] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-959667</name>
	  <uuid>ec9b0108-611c-411b-b107-4f85b7cff5e9</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22230-5010/.minikube/machines/addons-959667/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22230-5010/.minikube/machines/addons-959667/addons-959667.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:93:8d:ce'/>
	      <source network='mk-addons-959667'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:8d:75:e4'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1219 02:25:43.129874    9864 main.go:144] libmachine: waiting for domain to start...
	I1219 02:25:43.131108    9864 main.go:144] libmachine: domain is now running
	I1219 02:25:43.131125    9864 main.go:144] libmachine: waiting for IP...
	I1219 02:25:43.131770    9864 main.go:144] libmachine: domain addons-959667 has defined MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:25:43.132167    9864 main.go:144] libmachine: no network interface addresses found for domain addons-959667 (source=lease)
	I1219 02:25:43.132179    9864 main.go:144] libmachine: trying to list again with source=arp
	I1219 02:25:43.133102    9864 main.go:144] libmachine: unable to find current IP address of domain addons-959667 in network mk-addons-959667 (interfaces detected: [])
	I1219 02:25:43.133139    9864 retry.go:31] will retry after 236.790824ms: waiting for domain to come up
	I1219 02:25:43.371464    9864 main.go:144] libmachine: domain addons-959667 has defined MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:25:43.371978    9864 main.go:144] libmachine: no network interface addresses found for domain addons-959667 (source=lease)
	I1219 02:25:43.371991    9864 main.go:144] libmachine: trying to list again with source=arp
	I1219 02:25:43.372221    9864 main.go:144] libmachine: unable to find current IP address of domain addons-959667 in network mk-addons-959667 (interfaces detected: [])
	I1219 02:25:43.372253    9864 retry.go:31] will retry after 358.392586ms: waiting for domain to come up
	I1219 02:25:43.732623    9864 main.go:144] libmachine: domain addons-959667 has defined MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:25:43.733116    9864 main.go:144] libmachine: no network interface addresses found for domain addons-959667 (source=lease)
	I1219 02:25:43.733129    9864 main.go:144] libmachine: trying to list again with source=arp
	I1219 02:25:43.733399    9864 main.go:144] libmachine: unable to find current IP address of domain addons-959667 in network mk-addons-959667 (interfaces detected: [])
	I1219 02:25:43.733440    9864 retry.go:31] will retry after 445.525274ms: waiting for domain to come up
	I1219 02:25:44.181015    9864 main.go:144] libmachine: domain addons-959667 has defined MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:25:44.181543    9864 main.go:144] libmachine: no network interface addresses found for domain addons-959667 (source=lease)
	I1219 02:25:44.181558    9864 main.go:144] libmachine: trying to list again with source=arp
	I1219 02:25:44.181866    9864 main.go:144] libmachine: unable to find current IP address of domain addons-959667 in network mk-addons-959667 (interfaces detected: [])
	I1219 02:25:44.181910    9864 retry.go:31] will retry after 508.403403ms: waiting for domain to come up
	I1219 02:25:44.691472    9864 main.go:144] libmachine: domain addons-959667 has defined MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:25:44.691958    9864 main.go:144] libmachine: no network interface addresses found for domain addons-959667 (source=lease)
	I1219 02:25:44.691973    9864 main.go:144] libmachine: trying to list again with source=arp
	I1219 02:25:44.692222    9864 main.go:144] libmachine: unable to find current IP address of domain addons-959667 in network mk-addons-959667 (interfaces detected: [])
	I1219 02:25:44.692247    9864 retry.go:31] will retry after 476.03767ms: waiting for domain to come up
	I1219 02:25:45.169866    9864 main.go:144] libmachine: domain addons-959667 has defined MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:25:45.170410    9864 main.go:144] libmachine: no network interface addresses found for domain addons-959667 (source=lease)
	I1219 02:25:45.170427    9864 main.go:144] libmachine: trying to list again with source=arp
	I1219 02:25:45.170743    9864 main.go:144] libmachine: unable to find current IP address of domain addons-959667 in network mk-addons-959667 (interfaces detected: [])
	I1219 02:25:45.170776    9864 retry.go:31] will retry after 623.799486ms: waiting for domain to come up
	I1219 02:25:45.796484    9864 main.go:144] libmachine: domain addons-959667 has defined MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:25:45.796956    9864 main.go:144] libmachine: no network interface addresses found for domain addons-959667 (source=lease)
	I1219 02:25:45.796971    9864 main.go:144] libmachine: trying to list again with source=arp
	I1219 02:25:45.797228    9864 main.go:144] libmachine: unable to find current IP address of domain addons-959667 in network mk-addons-959667 (interfaces detected: [])
	I1219 02:25:45.797255    9864 retry.go:31] will retry after 930.899695ms: waiting for domain to come up
	I1219 02:25:46.729656    9864 main.go:144] libmachine: domain addons-959667 has defined MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:25:46.730137    9864 main.go:144] libmachine: no network interface addresses found for domain addons-959667 (source=lease)
	I1219 02:25:46.730151    9864 main.go:144] libmachine: trying to list again with source=arp
	I1219 02:25:46.730401    9864 main.go:144] libmachine: unable to find current IP address of domain addons-959667 in network mk-addons-959667 (interfaces detected: [])
	I1219 02:25:46.730429    9864 retry.go:31] will retry after 905.219445ms: waiting for domain to come up
	I1219 02:25:47.637321    9864 main.go:144] libmachine: domain addons-959667 has defined MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:25:47.637846    9864 main.go:144] libmachine: no network interface addresses found for domain addons-959667 (source=lease)
	I1219 02:25:47.637862    9864 main.go:144] libmachine: trying to list again with source=arp
	I1219 02:25:47.638133    9864 main.go:144] libmachine: unable to find current IP address of domain addons-959667 in network mk-addons-959667 (interfaces detected: [])
	I1219 02:25:47.638168    9864 retry.go:31] will retry after 1.793068231s: waiting for domain to come up
	I1219 02:25:49.433976    9864 main.go:144] libmachine: domain addons-959667 has defined MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:25:49.434511    9864 main.go:144] libmachine: no network interface addresses found for domain addons-959667 (source=lease)
	I1219 02:25:49.434527    9864 main.go:144] libmachine: trying to list again with source=arp
	I1219 02:25:49.434815    9864 main.go:144] libmachine: unable to find current IP address of domain addons-959667 in network mk-addons-959667 (interfaces detected: [])
	I1219 02:25:49.434852    9864 retry.go:31] will retry after 1.578721778s: waiting for domain to come up
	I1219 02:25:51.016505    9864 main.go:144] libmachine: domain addons-959667 has defined MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:25:51.017094    9864 main.go:144] libmachine: no network interface addresses found for domain addons-959667 (source=lease)
	I1219 02:25:51.017116    9864 main.go:144] libmachine: trying to list again with source=arp
	I1219 02:25:51.017482    9864 main.go:144] libmachine: unable to find current IP address of domain addons-959667 in network mk-addons-959667 (interfaces detected: [])
	I1219 02:25:51.017526    9864 retry.go:31] will retry after 2.320475349s: waiting for domain to come up
	I1219 02:25:53.341073    9864 main.go:144] libmachine: domain addons-959667 has defined MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:25:53.341486    9864 main.go:144] libmachine: no network interface addresses found for domain addons-959667 (source=lease)
	I1219 02:25:53.341499    9864 main.go:144] libmachine: trying to list again with source=arp
	I1219 02:25:53.341764    9864 main.go:144] libmachine: unable to find current IP address of domain addons-959667 in network mk-addons-959667 (interfaces detected: [])
	I1219 02:25:53.341809    9864 retry.go:31] will retry after 2.484492715s: waiting for domain to come up
	I1219 02:25:55.829315    9864 main.go:144] libmachine: domain addons-959667 has defined MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:25:55.829808    9864 main.go:144] libmachine: no network interface addresses found for domain addons-959667 (source=lease)
	I1219 02:25:55.829825    9864 main.go:144] libmachine: trying to list again with source=arp
	I1219 02:25:55.830075    9864 main.go:144] libmachine: unable to find current IP address of domain addons-959667 in network mk-addons-959667 (interfaces detected: [])
	I1219 02:25:55.830110    9864 retry.go:31] will retry after 4.122689073s: waiting for domain to come up
	I1219 02:25:59.954329    9864 main.go:144] libmachine: domain addons-959667 has defined MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:25:59.954876    9864 main.go:144] libmachine: domain addons-959667 has current primary IP address 192.168.39.204 and MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:25:59.954895    9864 main.go:144] libmachine: found domain IP: 192.168.39.204
	I1219 02:25:59.954904    9864 main.go:144] libmachine: reserving static IP address...
	I1219 02:25:59.955267    9864 main.go:144] libmachine: unable to find host DHCP lease matching {name: "addons-959667", mac: "52:54:00:93:8d:ce", ip: "192.168.39.204"} in network mk-addons-959667
	I1219 02:26:00.126333    9864 main.go:144] libmachine: reserved static IP address 192.168.39.204 for domain addons-959667
	I1219 02:26:00.126353    9864 main.go:144] libmachine: waiting for SSH...
	I1219 02:26:00.126368    9864 main.go:144] libmachine: Getting to WaitForSSH function...
	I1219 02:26:00.129192    9864 main.go:144] libmachine: domain addons-959667 has defined MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:00.129684    9864 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:93:8d:ce", ip: ""} in network mk-addons-959667: {Iface:virbr1 ExpiryTime:2025-12-19 03:25:56 +0000 UTC Type:0 Mac:52:54:00:93:8d:ce Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:minikube Clientid:01:52:54:00:93:8d:ce}
	I1219 02:26:00.129715    9864 main.go:144] libmachine: domain addons-959667 has defined IP address 192.168.39.204 and MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:00.129928    9864 main.go:144] libmachine: Using SSH client type: native
	I1219 02:26:00.130155    9864 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.39.204 22 <nil> <nil>}
	I1219 02:26:00.130168    9864 main.go:144] libmachine: About to run SSH command:
	exit 0
	I1219 02:26:00.234507    9864 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 02:26:00.234815    9864 main.go:144] libmachine: domain creation complete
	I1219 02:26:00.236029    9864 machine.go:94] provisionDockerMachine start ...
	I1219 02:26:00.237817    9864 main.go:144] libmachine: domain addons-959667 has defined MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:00.238122    9864 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:93:8d:ce", ip: ""} in network mk-addons-959667: {Iface:virbr1 ExpiryTime:2025-12-19 03:25:56 +0000 UTC Type:0 Mac:52:54:00:93:8d:ce Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-959667 Clientid:01:52:54:00:93:8d:ce}
	I1219 02:26:00.238142    9864 main.go:144] libmachine: domain addons-959667 has defined IP address 192.168.39.204 and MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:00.238321    9864 main.go:144] libmachine: Using SSH client type: native
	I1219 02:26:00.238562    9864 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.39.204 22 <nil> <nil>}
	I1219 02:26:00.238586    9864 main.go:144] libmachine: About to run SSH command:
	hostname
	I1219 02:26:00.338728    9864 main.go:144] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1219 02:26:00.338769    9864 buildroot.go:166] provisioning hostname "addons-959667"
	I1219 02:26:00.341370    9864 main.go:144] libmachine: domain addons-959667 has defined MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:00.341754    9864 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:93:8d:ce", ip: ""} in network mk-addons-959667: {Iface:virbr1 ExpiryTime:2025-12-19 03:25:56 +0000 UTC Type:0 Mac:52:54:00:93:8d:ce Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-959667 Clientid:01:52:54:00:93:8d:ce}
	I1219 02:26:00.341777    9864 main.go:144] libmachine: domain addons-959667 has defined IP address 192.168.39.204 and MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:00.341921    9864 main.go:144] libmachine: Using SSH client type: native
	I1219 02:26:00.342092    9864 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.39.204 22 <nil> <nil>}
	I1219 02:26:00.342101    9864 main.go:144] libmachine: About to run SSH command:
	sudo hostname addons-959667 && echo "addons-959667" | sudo tee /etc/hostname
	I1219 02:26:00.465450    9864 main.go:144] libmachine: SSH cmd err, output: <nil>: addons-959667
	
	I1219 02:26:00.468059    9864 main.go:144] libmachine: domain addons-959667 has defined MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:00.468461    9864 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:93:8d:ce", ip: ""} in network mk-addons-959667: {Iface:virbr1 ExpiryTime:2025-12-19 03:25:56 +0000 UTC Type:0 Mac:52:54:00:93:8d:ce Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-959667 Clientid:01:52:54:00:93:8d:ce}
	I1219 02:26:00.468490    9864 main.go:144] libmachine: domain addons-959667 has defined IP address 192.168.39.204 and MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:00.468678    9864 main.go:144] libmachine: Using SSH client type: native
	I1219 02:26:00.468857    9864 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.39.204 22 <nil> <nil>}
	I1219 02:26:00.468871    9864 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-959667' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-959667/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-959667' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1219 02:26:00.579963    9864 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 02:26:00.579988    9864 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22230-5010/.minikube CaCertPath:/home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22230-5010/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22230-5010/.minikube}
	I1219 02:26:00.580021    9864 buildroot.go:174] setting up certificates
	I1219 02:26:00.580031    9864 provision.go:84] configureAuth start
	I1219 02:26:00.583176    9864 main.go:144] libmachine: domain addons-959667 has defined MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:00.583535    9864 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:93:8d:ce", ip: ""} in network mk-addons-959667: {Iface:virbr1 ExpiryTime:2025-12-19 03:25:56 +0000 UTC Type:0 Mac:52:54:00:93:8d:ce Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-959667 Clientid:01:52:54:00:93:8d:ce}
	I1219 02:26:00.583556    9864 main.go:144] libmachine: domain addons-959667 has defined IP address 192.168.39.204 and MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:00.585597    9864 main.go:144] libmachine: domain addons-959667 has defined MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:00.585908    9864 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:93:8d:ce", ip: ""} in network mk-addons-959667: {Iface:virbr1 ExpiryTime:2025-12-19 03:25:56 +0000 UTC Type:0 Mac:52:54:00:93:8d:ce Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-959667 Clientid:01:52:54:00:93:8d:ce}
	I1219 02:26:00.585924    9864 main.go:144] libmachine: domain addons-959667 has defined IP address 192.168.39.204 and MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:00.586044    9864 provision.go:143] copyHostCerts
	I1219 02:26:00.586100    9864 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22230-5010/.minikube/key.pem (1679 bytes)
	I1219 02:26:00.586286    9864 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22230-5010/.minikube/ca.pem (1082 bytes)
	I1219 02:26:00.586364    9864 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22230-5010/.minikube/cert.pem (1123 bytes)
	I1219 02:26:00.586422    9864 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22230-5010/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca-key.pem org=jenkins.addons-959667 san=[127.0.0.1 192.168.39.204 addons-959667 localhost minikube]
	I1219 02:26:00.766616    9864 provision.go:177] copyRemoteCerts
	I1219 02:26:00.766681    9864 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1219 02:26:00.769087    9864 main.go:144] libmachine: domain addons-959667 has defined MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:00.769426    9864 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:93:8d:ce", ip: ""} in network mk-addons-959667: {Iface:virbr1 ExpiryTime:2025-12-19 03:25:56 +0000 UTC Type:0 Mac:52:54:00:93:8d:ce Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-959667 Clientid:01:52:54:00:93:8d:ce}
	I1219 02:26:00.769449    9864 main.go:144] libmachine: domain addons-959667 has defined IP address 192.168.39.204 and MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:00.769581    9864 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/addons-959667/id_rsa Username:docker}
	I1219 02:26:00.851052    9864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1219 02:26:00.877637    9864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1219 02:26:00.905253    9864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1219 02:26:00.931488    9864 provision.go:87] duration metric: took 351.440696ms to configureAuth
	I1219 02:26:00.931517    9864 buildroot.go:189] setting minikube options for container-runtime
	I1219 02:26:00.932033    9864 config.go:182] Loaded profile config "addons-959667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:26:00.934895    9864 main.go:144] libmachine: domain addons-959667 has defined MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:00.935218    9864 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:93:8d:ce", ip: ""} in network mk-addons-959667: {Iface:virbr1 ExpiryTime:2025-12-19 03:25:56 +0000 UTC Type:0 Mac:52:54:00:93:8d:ce Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-959667 Clientid:01:52:54:00:93:8d:ce}
	I1219 02:26:00.935238    9864 main.go:144] libmachine: domain addons-959667 has defined IP address 192.168.39.204 and MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:00.935375    9864 main.go:144] libmachine: Using SSH client type: native
	I1219 02:26:00.935599    9864 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.39.204 22 <nil> <nil>}
	I1219 02:26:00.935614    9864 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1219 02:26:01.156603    9864 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1219 02:26:01.156636    9864 machine.go:97] duration metric: took 920.590835ms to provisionDockerMachine
	I1219 02:26:01.156647    9864 client.go:176] duration metric: took 19.994158489s to LocalClient.Create
	I1219 02:26:01.156664    9864 start.go:167] duration metric: took 19.994215315s to libmachine.API.Create "addons-959667"
	I1219 02:26:01.156672    9864 start.go:293] postStartSetup for "addons-959667" (driver="kvm2")
	I1219 02:26:01.156682    9864 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1219 02:26:01.156756    9864 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1219 02:26:01.159677    9864 main.go:144] libmachine: domain addons-959667 has defined MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:01.160027    9864 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:93:8d:ce", ip: ""} in network mk-addons-959667: {Iface:virbr1 ExpiryTime:2025-12-19 03:25:56 +0000 UTC Type:0 Mac:52:54:00:93:8d:ce Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-959667 Clientid:01:52:54:00:93:8d:ce}
	I1219 02:26:01.160049    9864 main.go:144] libmachine: domain addons-959667 has defined IP address 192.168.39.204 and MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:01.160189    9864 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/addons-959667/id_rsa Username:docker}
	I1219 02:26:01.242141    9864 ssh_runner.go:195] Run: cat /etc/os-release
	I1219 02:26:01.246624    9864 info.go:137] Remote host: Buildroot 2025.02
	I1219 02:26:01.246647    9864 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-5010/.minikube/addons for local assets ...
	I1219 02:26:01.246715    9864 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-5010/.minikube/files for local assets ...
	I1219 02:26:01.246744    9864 start.go:296] duration metric: took 90.066935ms for postStartSetup
	I1219 02:26:01.263469    9864 main.go:144] libmachine: domain addons-959667 has defined MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:01.263893    9864 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:93:8d:ce", ip: ""} in network mk-addons-959667: {Iface:virbr1 ExpiryTime:2025-12-19 03:25:56 +0000 UTC Type:0 Mac:52:54:00:93:8d:ce Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-959667 Clientid:01:52:54:00:93:8d:ce}
	I1219 02:26:01.263922    9864 main.go:144] libmachine: domain addons-959667 has defined IP address 192.168.39.204 and MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:01.264129    9864 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/config.json ...
	I1219 02:26:01.300165    9864 start.go:128] duration metric: took 20.138972889s to createHost
	I1219 02:26:01.302751    9864 main.go:144] libmachine: domain addons-959667 has defined MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:01.303096    9864 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:93:8d:ce", ip: ""} in network mk-addons-959667: {Iface:virbr1 ExpiryTime:2025-12-19 03:25:56 +0000 UTC Type:0 Mac:52:54:00:93:8d:ce Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-959667 Clientid:01:52:54:00:93:8d:ce}
	I1219 02:26:01.303117    9864 main.go:144] libmachine: domain addons-959667 has defined IP address 192.168.39.204 and MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:01.303282    9864 main.go:144] libmachine: Using SSH client type: native
	I1219 02:26:01.303479    9864 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.39.204 22 <nil> <nil>}
	I1219 02:26:01.303489    9864 main.go:144] libmachine: About to run SSH command:
	date +%s.%N
	I1219 02:26:01.407950    9864 main.go:144] libmachine: SSH cmd err, output: <nil>: 1766111161.377061425
	
	I1219 02:26:01.407982    9864 fix.go:216] guest clock: 1766111161.377061425
	I1219 02:26:01.407991    9864 fix.go:229] Guest: 2025-12-19 02:26:01.377061425 +0000 UTC Remote: 2025-12-19 02:26:01.300188594 +0000 UTC m=+20.224824147 (delta=76.872831ms)
	I1219 02:26:01.408013    9864 fix.go:200] guest clock delta is within tolerance: 76.872831ms
	I1219 02:26:01.408021    9864 start.go:83] releasing machines lock for "addons-959667", held for 20.24691639s
	I1219 02:26:01.410902    9864 main.go:144] libmachine: domain addons-959667 has defined MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:01.411262    9864 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:93:8d:ce", ip: ""} in network mk-addons-959667: {Iface:virbr1 ExpiryTime:2025-12-19 03:25:56 +0000 UTC Type:0 Mac:52:54:00:93:8d:ce Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-959667 Clientid:01:52:54:00:93:8d:ce}
	I1219 02:26:01.411282    9864 main.go:144] libmachine: domain addons-959667 has defined IP address 192.168.39.204 and MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:01.411869    9864 ssh_runner.go:195] Run: cat /version.json
	I1219 02:26:01.411956    9864 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1219 02:26:01.414733    9864 main.go:144] libmachine: domain addons-959667 has defined MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:01.414852    9864 main.go:144] libmachine: domain addons-959667 has defined MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:01.415155    9864 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:93:8d:ce", ip: ""} in network mk-addons-959667: {Iface:virbr1 ExpiryTime:2025-12-19 03:25:56 +0000 UTC Type:0 Mac:52:54:00:93:8d:ce Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-959667 Clientid:01:52:54:00:93:8d:ce}
	I1219 02:26:01.415183    9864 main.go:144] libmachine: domain addons-959667 has defined IP address 192.168.39.204 and MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:01.415224    9864 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:93:8d:ce", ip: ""} in network mk-addons-959667: {Iface:virbr1 ExpiryTime:2025-12-19 03:25:56 +0000 UTC Type:0 Mac:52:54:00:93:8d:ce Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-959667 Clientid:01:52:54:00:93:8d:ce}
	I1219 02:26:01.415244    9864 main.go:144] libmachine: domain addons-959667 has defined IP address 192.168.39.204 and MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:01.415391    9864 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/addons-959667/id_rsa Username:docker}
	I1219 02:26:01.415488    9864 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/addons-959667/id_rsa Username:docker}
	I1219 02:26:01.490874    9864 ssh_runner.go:195] Run: systemctl --version
	I1219 02:26:01.529157    9864 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1219 02:26:02.040122    9864 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1219 02:26:02.046979    9864 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1219 02:26:02.047032    9864 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1219 02:26:02.064890    9864 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1219 02:26:02.064911    9864 start.go:496] detecting cgroup driver to use...
	I1219 02:26:02.064987    9864 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1219 02:26:02.082510    9864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1219 02:26:02.097602    9864 docker.go:218] disabling cri-docker service (if available) ...
	I1219 02:26:02.097660    9864 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1219 02:26:02.113297    9864 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1219 02:26:02.128258    9864 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1219 02:26:02.267070    9864 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1219 02:26:02.474434    9864 docker.go:234] disabling docker service ...
	I1219 02:26:02.474495    9864 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1219 02:26:02.490264    9864 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1219 02:26:02.503783    9864 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1219 02:26:02.652712    9864 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1219 02:26:02.790396    9864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1219 02:26:02.805345    9864 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 02:26:02.825957    9864 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1219 02:26:02.826020    9864 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 02:26:02.837153    9864 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1219 02:26:02.837208    9864 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 02:26:02.848312    9864 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 02:26:02.859030    9864 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 02:26:02.869702    9864 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1219 02:26:02.881371    9864 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 02:26:02.891952    9864 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 02:26:02.910378    9864 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 02:26:02.921434    9864 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1219 02:26:02.930674    9864 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1219 02:26:02.930714    9864 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1219 02:26:02.951281    9864 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1219 02:26:02.963820    9864 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 02:26:03.095725    9864 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1219 02:26:03.197361    9864 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1219 02:26:03.197458    9864 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1219 02:26:03.202878    9864 start.go:564] Will wait 60s for crictl version
	I1219 02:26:03.202947    9864 ssh_runner.go:195] Run: which crictl
	I1219 02:26:03.206510    9864 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1219 02:26:03.238584    9864 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1219 02:26:03.238717    9864 ssh_runner.go:195] Run: crio --version
	I1219 02:26:03.265245    9864 ssh_runner.go:195] Run: crio --version
	I1219 02:26:03.295667    9864 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.29.1 ...
	I1219 02:26:03.299345    9864 main.go:144] libmachine: domain addons-959667 has defined MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:03.299713    9864 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:93:8d:ce", ip: ""} in network mk-addons-959667: {Iface:virbr1 ExpiryTime:2025-12-19 03:25:56 +0000 UTC Type:0 Mac:52:54:00:93:8d:ce Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-959667 Clientid:01:52:54:00:93:8d:ce}
	I1219 02:26:03.299739    9864 main.go:144] libmachine: domain addons-959667 has defined IP address 192.168.39.204 and MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:03.299918    9864 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1219 02:26:03.303994    9864 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 02:26:03.318163    9864 kubeadm.go:884] updating cluster {Name:addons-959667 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-959667 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.204 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1219 02:26:03.318264    9864 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1219 02:26:03.318315    9864 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 02:26:03.347192    9864 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.3". assuming images are not preloaded.
	I1219 02:26:03.347256    9864 ssh_runner.go:195] Run: which lz4
	I1219 02:26:03.351068    9864 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1219 02:26:03.355311    9864 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1219 02:26:03.355337    9864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340314847 bytes)
	I1219 02:26:04.488659    9864 crio.go:462] duration metric: took 1.137613228s to copy over tarball
	I1219 02:26:04.488747    9864 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1219 02:26:05.889948    9864 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.401173204s)
	I1219 02:26:05.889979    9864 crio.go:469] duration metric: took 1.401283392s to extract the tarball
	I1219 02:26:05.889990    9864 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1219 02:26:05.924592    9864 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 02:26:05.959031    9864 crio.go:514] all images are preloaded for cri-o runtime.
	I1219 02:26:05.959055    9864 cache_images.go:86] Images are preloaded, skipping loading
	I1219 02:26:05.959062    9864 kubeadm.go:935] updating node { 192.168.39.204 8443 v1.34.3 crio true true} ...
	I1219 02:26:05.959136    9864 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-959667 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.204
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:addons-959667 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1219 02:26:05.959200    9864 ssh_runner.go:195] Run: crio config
	I1219 02:26:06.001462    9864 cni.go:84] Creating CNI manager for ""
	I1219 02:26:06.001486    9864 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1219 02:26:06.001511    9864 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1219 02:26:06.001538    9864 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.204 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-959667 NodeName:addons-959667 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.204"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.204 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1219 02:26:06.001703    9864 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.204
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-959667"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.204"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.204"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1219 02:26:06.001776    9864 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1219 02:26:06.013109    9864 binaries.go:51] Found k8s binaries, skipping transfer
	I1219 02:26:06.013171    9864 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1219 02:26:06.023876    9864 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1219 02:26:06.042079    9864 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1219 02:26:06.060415    9864 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1219 02:26:06.078722    9864 ssh_runner.go:195] Run: grep 192.168.39.204	control-plane.minikube.internal$ /etc/hosts
	I1219 02:26:06.082831    9864 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.204	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 02:26:06.096142    9864 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 02:26:06.230724    9864 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 02:26:06.251960    9864 certs.go:69] Setting up /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667 for IP: 192.168.39.204
	I1219 02:26:06.251987    9864 certs.go:195] generating shared ca certs ...
	I1219 02:26:06.252009    9864 certs.go:227] acquiring lock for ca certs: {Name:mk433b742ffd55233764aebe3c6f298e0d1f02ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 02:26:06.252181    9864 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22230-5010/.minikube/ca.key
	I1219 02:26:06.347544    9864 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22230-5010/.minikube/ca.crt ...
	I1219 02:26:06.347580    9864 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5010/.minikube/ca.crt: {Name:mk23ca3cdd90fc1d905b6a243a8981d72ac0a0ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 02:26:06.347730    9864 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22230-5010/.minikube/ca.key ...
	I1219 02:26:06.347755    9864 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5010/.minikube/ca.key: {Name:mk434ecbc7a02612f69392e0ae8ebca0107ab100 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 02:26:06.347829    9864 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22230-5010/.minikube/proxy-client-ca.key
	I1219 02:26:06.451107    9864 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22230-5010/.minikube/proxy-client-ca.crt ...
	I1219 02:26:06.451138    9864 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5010/.minikube/proxy-client-ca.crt: {Name:mkd5fd1ae98e4028e17d14bca3a6de4c8dee5987 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 02:26:06.451310    9864 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22230-5010/.minikube/proxy-client-ca.key ...
	I1219 02:26:06.451328    9864 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5010/.minikube/proxy-client-ca.key: {Name:mk406a916b9a1c8dfa37e02709146522a27aa608 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 02:26:06.451423    9864 certs.go:257] generating profile certs ...
	I1219 02:26:06.451483    9864 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/client.key
	I1219 02:26:06.451499    9864 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/client.crt with IP's: []
	I1219 02:26:06.492816    9864 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/client.crt ...
	I1219 02:26:06.492842    9864 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/client.crt: {Name:mk62ce3bd350b96511cda14dd19794d9129fa51b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 02:26:06.492991    9864 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/client.key ...
	I1219 02:26:06.493001    9864 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/client.key: {Name:mkef26becdac67e53bd6c95827feda38e52d41ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 02:26:06.493072    9864 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/apiserver.key.9792ed19
	I1219 02:26:06.493090    9864 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/apiserver.crt.9792ed19 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.204]
	I1219 02:26:06.570653    9864 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/apiserver.crt.9792ed19 ...
	I1219 02:26:06.570676    9864 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/apiserver.crt.9792ed19: {Name:mke936cf07582e2cfd32718729c6cd55271994a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 02:26:06.570819    9864 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/apiserver.key.9792ed19 ...
	I1219 02:26:06.570831    9864 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/apiserver.key.9792ed19: {Name:mke548a79f9b8b10422df98ab11d11d300ca80cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 02:26:06.570900    9864 certs.go:382] copying /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/apiserver.crt.9792ed19 -> /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/apiserver.crt
	I1219 02:26:06.570967    9864 certs.go:386] copying /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/apiserver.key.9792ed19 -> /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/apiserver.key
	I1219 02:26:06.571015    9864 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/proxy-client.key
	I1219 02:26:06.571032    9864 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/proxy-client.crt with IP's: []
	I1219 02:26:06.747714    9864 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/proxy-client.crt ...
	I1219 02:26:06.747740    9864 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/proxy-client.crt: {Name:mk38df814763e8dea0edb23391633c74caccbaaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 02:26:06.747889    9864 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/proxy-client.key ...
	I1219 02:26:06.747899    9864 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/proxy-client.key: {Name:mkdef5815cf96b45ba0f21c473c091a740ac9e83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 02:26:06.748058    9864 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca-key.pem (1675 bytes)
	I1219 02:26:06.748093    9864 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca.pem (1082 bytes)
	I1219 02:26:06.748118    9864 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/cert.pem (1123 bytes)
	I1219 02:26:06.748141    9864 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/key.pem (1679 bytes)
	I1219 02:26:06.748671    9864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1219 02:26:06.777001    9864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1219 02:26:06.803067    9864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1219 02:26:06.829146    9864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1219 02:26:06.855053    9864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1219 02:26:06.882947    9864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1219 02:26:06.909078    9864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1219 02:26:06.936788    9864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1219 02:26:06.965821    9864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1219 02:26:06.992814    9864 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1219 02:26:07.012351    9864 ssh_runner.go:195] Run: openssl version
	I1219 02:26:07.018394    9864 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1219 02:26:07.028716    9864 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1219 02:26:07.038851    9864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1219 02:26:07.043368    9864 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 19 02:26 /usr/share/ca-certificates/minikubeCA.pem
	I1219 02:26:07.043412    9864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1219 02:26:07.049849    9864 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1219 02:26:07.059968    9864 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1219 02:26:07.070164    9864 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1219 02:26:07.074551    9864 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1219 02:26:07.074613    9864 kubeadm.go:401] StartCluster: {Name:addons-959667 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-959667 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.204 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 02:26:07.074692    9864 cri.go:57] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1219 02:26:07.074736    9864 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 02:26:07.104434    9864 cri.go:92] found id: ""
	I1219 02:26:07.104483    9864 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1219 02:26:07.115738    9864 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1219 02:26:07.126423    9864 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1219 02:26:07.136838    9864 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1219 02:26:07.136851    9864 kubeadm.go:158] found existing configuration files:
	
	I1219 02:26:07.136881    9864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1219 02:26:07.146633    9864 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1219 02:26:07.146693    9864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1219 02:26:07.156824    9864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1219 02:26:07.166511    9864 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1219 02:26:07.166566    9864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1219 02:26:07.176839    9864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1219 02:26:07.186462    9864 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1219 02:26:07.186498    9864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1219 02:26:07.196794    9864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1219 02:26:07.206348    9864 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1219 02:26:07.206384    9864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1219 02:26:07.216687    9864 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1219 02:26:07.352548    9864 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1219 02:26:18.954029    9864 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1219 02:26:18.954079    9864 kubeadm.go:319] [preflight] Running pre-flight checks
	I1219 02:26:18.954137    9864 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1219 02:26:18.954259    9864 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1219 02:26:18.954396    9864 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1219 02:26:18.954490    9864 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1219 02:26:18.956509    9864 out.go:252]   - Generating certificates and keys ...
	I1219 02:26:18.956587    9864 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1219 02:26:18.956642    9864 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1219 02:26:18.956700    9864 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1219 02:26:18.956746    9864 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1219 02:26:18.956799    9864 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1219 02:26:18.956866    9864 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1219 02:26:18.956946    9864 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1219 02:26:18.957102    9864 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-959667 localhost] and IPs [192.168.39.204 127.0.0.1 ::1]
	I1219 02:26:18.957151    9864 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1219 02:26:18.957261    9864 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-959667 localhost] and IPs [192.168.39.204 127.0.0.1 ::1]
	I1219 02:26:18.957349    9864 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1219 02:26:18.957433    9864 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1219 02:26:18.957504    9864 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1219 02:26:18.957601    9864 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1219 02:26:18.957684    9864 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1219 02:26:18.957765    9864 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1219 02:26:18.957840    9864 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1219 02:26:18.957927    9864 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1219 02:26:18.958005    9864 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1219 02:26:18.958085    9864 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1219 02:26:18.958146    9864 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1219 02:26:18.959374    9864 out.go:252]   - Booting up control plane ...
	I1219 02:26:18.959454    9864 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1219 02:26:18.959521    9864 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1219 02:26:18.959598    9864 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1219 02:26:18.959687    9864 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1219 02:26:18.959768    9864 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1219 02:26:18.959875    9864 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1219 02:26:18.959956    9864 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1219 02:26:18.960021    9864 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1219 02:26:18.960154    9864 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1219 02:26:18.960250    9864 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1219 02:26:18.960325    9864 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001002409s
	I1219 02:26:18.960400    9864 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1219 02:26:18.960474    9864 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.204:8443/livez
	I1219 02:26:18.960548    9864 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1219 02:26:18.960631    9864 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1219 02:26:18.960701    9864 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.295541264s
	I1219 02:26:18.960757    9864 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.796744707s
	I1219 02:26:18.960836    9864 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.501469883s
	I1219 02:26:18.960944    9864 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1219 02:26:18.961069    9864 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1219 02:26:18.961135    9864 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1219 02:26:18.961301    9864 kubeadm.go:319] [mark-control-plane] Marking the node addons-959667 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1219 02:26:18.961372    9864 kubeadm.go:319] [bootstrap-token] Using token: 23md63.zrdlpj5fn3y892ld
	I1219 02:26:18.962697    9864 out.go:252]   - Configuring RBAC rules ...
	I1219 02:26:18.962784    9864 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1219 02:26:18.962875    9864 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1219 02:26:18.962995    9864 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1219 02:26:18.963107    9864 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1219 02:26:18.963202    9864 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1219 02:26:18.963271    9864 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1219 02:26:18.963370    9864 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1219 02:26:18.963411    9864 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1219 02:26:18.963452    9864 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1219 02:26:18.963458    9864 kubeadm.go:319] 
	I1219 02:26:18.963503    9864 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1219 02:26:18.963509    9864 kubeadm.go:319] 
	I1219 02:26:18.963585    9864 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1219 02:26:18.963592    9864 kubeadm.go:319] 
	I1219 02:26:18.963613    9864 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1219 02:26:18.963663    9864 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1219 02:26:18.963749    9864 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1219 02:26:18.963767    9864 kubeadm.go:319] 
	I1219 02:26:18.963846    9864 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1219 02:26:18.963858    9864 kubeadm.go:319] 
	I1219 02:26:18.963927    9864 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1219 02:26:18.963940    9864 kubeadm.go:319] 
	I1219 02:26:18.964013    9864 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1219 02:26:18.964112    9864 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1219 02:26:18.964211    9864 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1219 02:26:18.964219    9864 kubeadm.go:319] 
	I1219 02:26:18.964302    9864 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1219 02:26:18.964392    9864 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1219 02:26:18.964400    9864 kubeadm.go:319] 
	I1219 02:26:18.964515    9864 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 23md63.zrdlpj5fn3y892ld \
	I1219 02:26:18.964694    9864 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:9dcfee3bca1209f1e6a2f243188a524f055cd23a8ec5bf91b78f296e51199b49 \
	I1219 02:26:18.964726    9864 kubeadm.go:319] 	--control-plane 
	I1219 02:26:18.964734    9864 kubeadm.go:319] 
	I1219 02:26:18.964827    9864 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1219 02:26:18.964835    9864 kubeadm.go:319] 
	I1219 02:26:18.964899    9864 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 23md63.zrdlpj5fn3y892ld \
	I1219 02:26:18.964995    9864 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:9dcfee3bca1209f1e6a2f243188a524f055cd23a8ec5bf91b78f296e51199b49 
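The join commands printed above carry the bootstrap token and the CA certificate hash. As a sanity check (not part of this run), both values can be re-derived on the control plane with standard kubeadm/openssl tooling, per the upstream kubeadm documentation:

	# list bootstrap tokens currently known to the cluster
	sudo kubeadm token list
	# recompute the --discovery-token-ca-cert-hash from the cluster CA
	openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'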
	I1219 02:26:18.965004    9864 cni.go:84] Creating CNI manager for ""
	I1219 02:26:18.965010    9864 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1219 02:26:18.966270    9864 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1219 02:26:18.967385    9864 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1219 02:26:18.982873    9864 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
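The 496-byte conflist copied here is not echoed in the log. As an illustration only, a bridge CNI config of this kind typically combines the standard bridge, host-local IPAM, and portmap plugins; the subnet and exact fields below are assumptions, not the contents of minikube's actual 1-k8s.conflist:

	# inspect what was written (comments show a typical bridge conflist, not the exact file)
	sudo cat /etc/cni/net.d/1-k8s.conflist
	# {
	#   "cniVersion": "0.3.1",
	#   "name": "bridge",
	#   "plugins": [
	#     {"type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true,
	#      "hairpinMode": true, "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}},
	#     {"type": "portmap", "capabilities": {"portMappings": true}}
	#   ]
	# }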
	I1219 02:26:19.002953    9864 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1219 02:26:19.003031    9864 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 02:26:19.003087    9864 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-959667 minikube.k8s.io/updated_at=2025_12_19T02_26_19_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6 minikube.k8s.io/name=addons-959667 minikube.k8s.io/primary=true
	I1219 02:26:19.038723    9864 ops.go:34] apiserver oom_adj: -16
	I1219 02:26:19.136344    9864 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 02:26:19.636994    9864 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 02:26:20.137346    9864 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 02:26:20.637047    9864 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 02:26:21.136647    9864 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 02:26:21.637248    9864 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 02:26:22.136650    9864 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 02:26:22.637143    9864 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 02:26:23.136763    9864 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 02:26:23.637016    9864 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 02:26:23.747913    9864 kubeadm.go:1114] duration metric: took 4.744927457s to wait for elevateKubeSystemPrivileges
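The repeated "kubectl get sa default" calls above are minikube polling until the default ServiceAccount exists, since the controller-manager creates it asynchronously after the API server comes up. A rough shell equivalent, assuming the binary and kubeconfig paths shown in the log:

	# poll until the "default" ServiceAccount exists, then proceed
	until sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done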
	I1219 02:26:23.747955    9864 kubeadm.go:403] duration metric: took 16.673346126s to StartCluster
	I1219 02:26:23.747980    9864 settings.go:142] acquiring lock: {Name:mkb131693a40b9aa50c302192272518c3561c861 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 02:26:23.748105    9864 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-5010/kubeconfig
	I1219 02:26:23.748437    9864 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5010/kubeconfig: {Name:mk464ac013e036429e360ed78fc04687ed75f83a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 02:26:23.748658    9864 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1219 02:26:23.748679    9864 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.204 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1219 02:26:23.748735    9864 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1219 02:26:23.748848    9864 addons.go:70] Setting yakd=true in profile "addons-959667"
	I1219 02:26:23.748869    9864 addons.go:239] Setting addon yakd=true in "addons-959667"
	I1219 02:26:23.748868    9864 addons.go:70] Setting inspektor-gadget=true in profile "addons-959667"
	I1219 02:26:23.748889    9864 addons.go:239] Setting addon inspektor-gadget=true in "addons-959667"
	I1219 02:26:23.748902    9864 host.go:66] Checking if "addons-959667" exists ...
	I1219 02:26:23.748926    9864 host.go:66] Checking if "addons-959667" exists ...
	I1219 02:26:23.748927    9864 config.go:182] Loaded profile config "addons-959667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:26:23.748901    9864 addons.go:70] Setting registry-creds=true in profile "addons-959667"
	I1219 02:26:23.748957    9864 addons.go:239] Setting addon registry-creds=true in "addons-959667"
	I1219 02:26:23.748977    9864 addons.go:70] Setting registry=true in profile "addons-959667"
	I1219 02:26:23.748973    9864 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-959667"
	I1219 02:26:23.748989    9864 addons.go:239] Setting addon registry=true in "addons-959667"
	I1219 02:26:23.748994    9864 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-959667"
	I1219 02:26:23.749003    9864 host.go:66] Checking if "addons-959667" exists ...
	I1219 02:26:23.749009    9864 host.go:66] Checking if "addons-959667" exists ...
	I1219 02:26:23.749015    9864 host.go:66] Checking if "addons-959667" exists ...
	I1219 02:26:23.749686    9864 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-959667"
	I1219 02:26:23.749735    9864 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-959667"
	I1219 02:26:23.749808    9864 addons.go:70] Setting default-storageclass=true in profile "addons-959667"
	I1219 02:26:23.749836    9864 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-959667"
	I1219 02:26:23.749885    9864 addons.go:70] Setting volcano=true in profile "addons-959667"
	I1219 02:26:23.749902    9864 addons.go:239] Setting addon volcano=true in "addons-959667"
	I1219 02:26:23.749924    9864 host.go:66] Checking if "addons-959667" exists ...
	I1219 02:26:23.750026    9864 addons.go:70] Setting gcp-auth=true in profile "addons-959667"
	I1219 02:26:23.750034    9864 addons.go:70] Setting metrics-server=true in profile "addons-959667"
	I1219 02:26:23.750048    9864 mustload.go:66] Loading cluster: addons-959667
	I1219 02:26:23.750053    9864 addons.go:239] Setting addon metrics-server=true in "addons-959667"
	I1219 02:26:23.750081    9864 host.go:66] Checking if "addons-959667" exists ...
	I1219 02:26:23.750214    9864 config.go:182] Loaded profile config "addons-959667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:26:23.750658    9864 addons.go:70] Setting storage-provisioner=true in profile "addons-959667"
	I1219 02:26:23.750829    9864 addons.go:239] Setting addon storage-provisioner=true in "addons-959667"
	I1219 02:26:23.750867    9864 host.go:66] Checking if "addons-959667" exists ...
	I1219 02:26:23.750757    9864 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-959667"
	I1219 02:26:23.750921    9864 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-959667"
	I1219 02:26:23.750949    9864 host.go:66] Checking if "addons-959667" exists ...
	I1219 02:26:23.750766    9864 addons.go:70] Setting volumesnapshots=true in profile "addons-959667"
	I1219 02:26:23.751059    9864 addons.go:239] Setting addon volumesnapshots=true in "addons-959667"
	I1219 02:26:23.751082    9864 host.go:66] Checking if "addons-959667" exists ...
	I1219 02:26:23.751176    9864 out.go:179] * Verifying Kubernetes components...
	I1219 02:26:23.750770    9864 addons.go:70] Setting cloud-spanner=true in profile "addons-959667"
	I1219 02:26:23.751366    9864 addons.go:239] Setting addon cloud-spanner=true in "addons-959667"
	I1219 02:26:23.751389    9864 host.go:66] Checking if "addons-959667" exists ...
	I1219 02:26:23.750777    9864 addons.go:70] Setting ingress=true in profile "addons-959667"
	I1219 02:26:23.751450    9864 addons.go:239] Setting addon ingress=true in "addons-959667"
	I1219 02:26:23.751482    9864 host.go:66] Checking if "addons-959667" exists ...
	I1219 02:26:23.750782    9864 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-959667"
	I1219 02:26:23.751771    9864 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-959667"
	I1219 02:26:23.751794    9864 host.go:66] Checking if "addons-959667" exists ...
	I1219 02:26:23.750788    9864 addons.go:70] Setting ingress-dns=true in profile "addons-959667"
	I1219 02:26:23.752109    9864 addons.go:239] Setting addon ingress-dns=true in "addons-959667"
	I1219 02:26:23.752137    9864 host.go:66] Checking if "addons-959667" exists ...
	I1219 02:26:23.752885    9864 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 02:26:23.756448    9864 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.6
	I1219 02:26:23.756449    9864 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.1
	I1219 02:26:23.756504    9864 out.go:179]   - Using image docker.io/registry:3.0.0
	W1219 02:26:23.757152    9864 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
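The warning above is expected with the crio runtime; the volcano addon is skipped and the rest continue. Which addons actually ended up enabled for this profile can be listed afterwards with:

	minikube -p addons-959667 addons list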
	I1219 02:26:23.757734    9864 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1219 02:26:23.757744    9864 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1219 02:26:23.757753    9864 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1219 02:26:23.757966    9864 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1219 02:26:23.757984    9864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1219 02:26:23.758310    9864 host.go:66] Checking if "addons-959667" exists ...
	I1219 02:26:23.758673    9864 addons.go:239] Setting addon default-storageclass=true in "addons-959667"
	I1219 02:26:23.758701    9864 host.go:66] Checking if "addons-959667" exists ...
	I1219 02:26:23.759020    9864 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-959667"
	I1219 02:26:23.759060    9864 host.go:66] Checking if "addons-959667" exists ...
	I1219 02:26:23.759219    9864 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1219 02:26:23.759342    9864 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1219 02:26:23.759587    9864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1219 02:26:23.759915    9864 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1219 02:26:23.759922    9864 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1219 02:26:23.759917    9864 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1219 02:26:23.760623    9864 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1219 02:26:23.760630    9864 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1219 02:26:23.760699    9864 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1219 02:26:23.760976    9864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1219 02:26:23.761291    9864 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1219 02:26:23.761292    9864 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1219 02:26:23.761738    9864 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1219 02:26:23.761344    9864 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1219 02:26:23.761765    9864 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1219 02:26:23.761360    9864 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1219 02:26:23.761807    9864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1219 02:26:23.762042    9864 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1219 02:26:23.762047    9864 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 02:26:23.762050    9864 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1219 02:26:23.762090    9864 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1219 02:26:23.762114    9864 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1219 02:26:23.762120    9864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1219 02:26:23.762125    9864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1219 02:26:23.763433    9864 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1219 02:26:23.763470    9864 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1219 02:26:23.763495    9864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1219 02:26:23.763534    9864 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 02:26:23.763550    9864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 02:26:23.764203    9864 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 02:26:23.764220    9864 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 02:26:23.766796    9864 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1219 02:26:23.767557    9864 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1219 02:26:23.767587    9864 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1219 02:26:23.769040    9864 main.go:144] libmachine: domain addons-959667 has defined MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:23.769253    9864 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1219 02:26:23.769379    9864 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1219 02:26:23.769402    9864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1219 02:26:23.769465    9864 main.go:144] libmachine: domain addons-959667 has defined MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:23.770267    9864 out.go:179]   - Using image docker.io/busybox:stable
	I1219 02:26:23.770597    9864 main.go:144] libmachine: domain addons-959667 has defined MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:23.771082    9864 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:93:8d:ce", ip: ""} in network mk-addons-959667: {Iface:virbr1 ExpiryTime:2025-12-19 03:25:56 +0000 UTC Type:0 Mac:52:54:00:93:8d:ce Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-959667 Clientid:01:52:54:00:93:8d:ce}
	I1219 02:26:23.771128    9864 main.go:144] libmachine: domain addons-959667 has defined IP address 192.168.39.204 and MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:23.771290    9864 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:93:8d:ce", ip: ""} in network mk-addons-959667: {Iface:virbr1 ExpiryTime:2025-12-19 03:25:56 +0000 UTC Type:0 Mac:52:54:00:93:8d:ce Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-959667 Clientid:01:52:54:00:93:8d:ce}
	I1219 02:26:23.771319    9864 main.go:144] libmachine: domain addons-959667 has defined IP address 192.168.39.204 and MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:23.771559    9864 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1219 02:26:23.771664    9864 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1219 02:26:23.771679    9864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1219 02:26:23.771962    9864 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/addons-959667/id_rsa Username:docker}
	I1219 02:26:23.771987    9864 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/addons-959667/id_rsa Username:docker}
	I1219 02:26:23.772243    9864 main.go:144] libmachine: domain addons-959667 has defined MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:23.772522    9864 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:93:8d:ce", ip: ""} in network mk-addons-959667: {Iface:virbr1 ExpiryTime:2025-12-19 03:25:56 +0000 UTC Type:0 Mac:52:54:00:93:8d:ce Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-959667 Clientid:01:52:54:00:93:8d:ce}
	I1219 02:26:23.772556    9864 main.go:144] libmachine: domain addons-959667 has defined IP address 192.168.39.204 and MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:23.773219    9864 main.go:144] libmachine: domain addons-959667 has defined MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:23.773499    9864 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/addons-959667/id_rsa Username:docker}
	I1219 02:26:23.774048    9864 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:93:8d:ce", ip: ""} in network mk-addons-959667: {Iface:virbr1 ExpiryTime:2025-12-19 03:25:56 +0000 UTC Type:0 Mac:52:54:00:93:8d:ce Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-959667 Clientid:01:52:54:00:93:8d:ce}
	I1219 02:26:23.774079    9864 main.go:144] libmachine: domain addons-959667 has defined IP address 192.168.39.204 and MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:23.774433    9864 main.go:144] libmachine: domain addons-959667 has defined MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:23.774729    9864 main.go:144] libmachine: domain addons-959667 has defined MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:23.775011    9864 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:93:8d:ce", ip: ""} in network mk-addons-959667: {Iface:virbr1 ExpiryTime:2025-12-19 03:25:56 +0000 UTC Type:0 Mac:52:54:00:93:8d:ce Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-959667 Clientid:01:52:54:00:93:8d:ce}
	I1219 02:26:23.775038    9864 main.go:144] libmachine: domain addons-959667 has defined IP address 192.168.39.204 and MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:23.775134    9864 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/addons-959667/id_rsa Username:docker}
	I1219 02:26:23.775330    9864 main.go:144] libmachine: domain addons-959667 has defined MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:23.775732    9864 main.go:144] libmachine: domain addons-959667 has defined MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:23.775843    9864 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/addons-959667/id_rsa Username:docker}
	I1219 02:26:23.775897    9864 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:93:8d:ce", ip: ""} in network mk-addons-959667: {Iface:virbr1 ExpiryTime:2025-12-19 03:25:56 +0000 UTC Type:0 Mac:52:54:00:93:8d:ce Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-959667 Clientid:01:52:54:00:93:8d:ce}
	I1219 02:26:23.775927    9864 main.go:144] libmachine: domain addons-959667 has defined IP address 192.168.39.204 and MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:23.776198    9864 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:93:8d:ce", ip: ""} in network mk-addons-959667: {Iface:virbr1 ExpiryTime:2025-12-19 03:25:56 +0000 UTC Type:0 Mac:52:54:00:93:8d:ce Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-959667 Clientid:01:52:54:00:93:8d:ce}
	I1219 02:26:23.776223    9864 main.go:144] libmachine: domain addons-959667 has defined IP address 192.168.39.204 and MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:23.776531    9864 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/addons-959667/id_rsa Username:docker}
	I1219 02:26:23.776621    9864 main.go:144] libmachine: domain addons-959667 has defined MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:23.776759    9864 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:93:8d:ce", ip: ""} in network mk-addons-959667: {Iface:virbr1 ExpiryTime:2025-12-19 03:25:56 +0000 UTC Type:0 Mac:52:54:00:93:8d:ce Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-959667 Clientid:01:52:54:00:93:8d:ce}
	I1219 02:26:23.776792    9864 main.go:144] libmachine: domain addons-959667 has defined IP address 192.168.39.204 and MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:23.776860    9864 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/addons-959667/id_rsa Username:docker}
	I1219 02:26:23.776931    9864 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:93:8d:ce", ip: ""} in network mk-addons-959667: {Iface:virbr1 ExpiryTime:2025-12-19 03:25:56 +0000 UTC Type:0 Mac:52:54:00:93:8d:ce Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-959667 Clientid:01:52:54:00:93:8d:ce}
	I1219 02:26:23.776955    9864 main.go:144] libmachine: domain addons-959667 has defined IP address 192.168.39.204 and MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:23.777096    9864 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1219 02:26:23.777362    9864 main.go:144] libmachine: domain addons-959667 has defined MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:23.777385    9864 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/addons-959667/id_rsa Username:docker}
	I1219 02:26:23.777463    9864 main.go:144] libmachine: domain addons-959667 has defined MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:23.777459    9864 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/addons-959667/id_rsa Username:docker}
	I1219 02:26:23.778196    9864 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:93:8d:ce", ip: ""} in network mk-addons-959667: {Iface:virbr1 ExpiryTime:2025-12-19 03:25:56 +0000 UTC Type:0 Mac:52:54:00:93:8d:ce Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-959667 Clientid:01:52:54:00:93:8d:ce}
	I1219 02:26:23.778229    9864 main.go:144] libmachine: domain addons-959667 has defined IP address 192.168.39.204 and MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:23.778477    9864 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:93:8d:ce", ip: ""} in network mk-addons-959667: {Iface:virbr1 ExpiryTime:2025-12-19 03:25:56 +0000 UTC Type:0 Mac:52:54:00:93:8d:ce Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-959667 Clientid:01:52:54:00:93:8d:ce}
	I1219 02:26:23.778492    9864 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/addons-959667/id_rsa Username:docker}
	I1219 02:26:23.778517    9864 main.go:144] libmachine: domain addons-959667 has defined IP address 192.168.39.204 and MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:23.778541    9864 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:93:8d:ce", ip: ""} in network mk-addons-959667: {Iface:virbr1 ExpiryTime:2025-12-19 03:25:56 +0000 UTC Type:0 Mac:52:54:00:93:8d:ce Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-959667 Clientid:01:52:54:00:93:8d:ce}
	I1219 02:26:23.778583    9864 main.go:144] libmachine: domain addons-959667 has defined IP address 192.168.39.204 and MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:23.778804    9864 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/addons-959667/id_rsa Username:docker}
	I1219 02:26:23.779399    9864 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/addons-959667/id_rsa Username:docker}
	I1219 02:26:23.779520    9864 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1219 02:26:23.779994    9864 main.go:144] libmachine: domain addons-959667 has defined MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:23.780329    9864 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:93:8d:ce", ip: ""} in network mk-addons-959667: {Iface:virbr1 ExpiryTime:2025-12-19 03:25:56 +0000 UTC Type:0 Mac:52:54:00:93:8d:ce Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-959667 Clientid:01:52:54:00:93:8d:ce}
	I1219 02:26:23.780369    9864 main.go:144] libmachine: domain addons-959667 has defined IP address 192.168.39.204 and MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:23.780508    9864 main.go:144] libmachine: domain addons-959667 has defined MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:23.780535    9864 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/addons-959667/id_rsa Username:docker}
	I1219 02:26:23.780950    9864 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:93:8d:ce", ip: ""} in network mk-addons-959667: {Iface:virbr1 ExpiryTime:2025-12-19 03:25:56 +0000 UTC Type:0 Mac:52:54:00:93:8d:ce Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-959667 Clientid:01:52:54:00:93:8d:ce}
	I1219 02:26:23.780979    9864 main.go:144] libmachine: domain addons-959667 has defined IP address 192.168.39.204 and MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:23.781120    9864 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/addons-959667/id_rsa Username:docker}
	I1219 02:26:23.781838    9864 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1219 02:26:23.782864    9864 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1219 02:26:23.783906    9864 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1219 02:26:23.783924    9864 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1219 02:26:23.786596    9864 main.go:144] libmachine: domain addons-959667 has defined MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:23.787065    9864 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:93:8d:ce", ip: ""} in network mk-addons-959667: {Iface:virbr1 ExpiryTime:2025-12-19 03:25:56 +0000 UTC Type:0 Mac:52:54:00:93:8d:ce Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-959667 Clientid:01:52:54:00:93:8d:ce}
	I1219 02:26:23.787099    9864 main.go:144] libmachine: domain addons-959667 has defined IP address 192.168.39.204 and MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:23.787287    9864 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/addons-959667/id_rsa Username:docker}
	W1219 02:26:23.965061    9864 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:48390->192.168.39.204:22: read: connection reset by peer
	I1219 02:26:23.965088    9864 retry.go:31] will retry after 217.142439ms: ssh: handshake failed: read tcp 192.168.39.1:48390->192.168.39.204:22: read: connection reset by peer
	W1219 02:26:23.965163    9864 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:48398->192.168.39.204:22: read: connection reset by peer
	I1219 02:26:23.965173    9864 retry.go:31] will retry after 134.039332ms: ssh: handshake failed: read tcp 192.168.39.1:48398->192.168.39.204:22: read: connection reset by peer
	W1219 02:26:24.100637    9864 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:48430->192.168.39.204:22: read: connection reset by peer
	I1219 02:26:24.100666    9864 retry.go:31] will retry after 522.527454ms: ssh: handshake failed: read tcp 192.168.39.1:48430->192.168.39.204:22: read: connection reset by peer
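The handshake resets above are transient (the guest's sshd is still settling) and each dial is retried with a short backoff. If they persisted, connectivity could be checked by hand with the same key, user, and address the log reports:

	ssh -i /home/jenkins/minikube-integration/22230-5010/.minikube/machines/addons-959667/id_rsa \
	    -o StrictHostKeyChecking=no docker@192.168.39.204 true && echo "ssh ok"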
	I1219 02:26:24.260216    9864 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 02:26:24.260314    9864 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1219 02:26:24.598941    9864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1219 02:26:24.674986    9864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 02:26:24.713982    9864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1219 02:26:24.756294    9864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 02:26:24.756696    9864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1219 02:26:24.806995    9864 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1219 02:26:24.807029    9864 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1219 02:26:24.828491    9864 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1219 02:26:24.828518    9864 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1219 02:26:24.853351    9864 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1219 02:26:24.853405    9864 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1219 02:26:24.885183    9864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1219 02:26:24.954780    9864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1219 02:26:24.959873    9864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1219 02:26:24.987397    9864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1219 02:26:25.020075    9864 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1219 02:26:25.020102    9864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1219 02:26:25.113305    9864 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1219 02:26:25.113343    9864 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1219 02:26:25.180478    9864 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1219 02:26:25.180509    9864 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1219 02:26:25.410781    9864 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1219 02:26:25.410812    9864 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1219 02:26:25.462998    9864 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1219 02:26:25.463027    9864 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1219 02:26:25.501589    9864 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1219 02:26:25.501625    9864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1219 02:26:25.750841    9864 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1219 02:26:25.750873    9864 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1219 02:26:25.755285    9864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1219 02:26:25.768657    9864 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1219 02:26:25.768674    9864 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1219 02:26:25.830461    9864 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1219 02:26:25.830493    9864 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1219 02:26:25.887016    9864 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 02:26:25.887043    9864 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1219 02:26:25.940463    9864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1219 02:26:26.036103    9864 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1219 02:26:26.036125    9864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1219 02:26:26.052071    9864 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1219 02:26:26.052101    9864 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1219 02:26:26.095325    9864 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1219 02:26:26.095351    9864 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1219 02:26:26.170463    9864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 02:26:26.377491    9864 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1219 02:26:26.377527    9864 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1219 02:26:26.394652    9864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1219 02:26:26.461364    9864 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1219 02:26:26.461401    9864 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1219 02:26:26.720890    9864 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1219 02:26:26.720918    9864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1219 02:26:26.874957    9864 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1219 02:26:26.874983    9864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1219 02:26:27.064287    9864 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.804030725s)
	I1219 02:26:27.064334    9864 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.803986667s)
	I1219 02:26:27.064361    9864 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
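The ConfigMap replace above injects a hosts block into the CoreDNS Corefile so that host.minikube.internal resolves to the host-side bridge IP. Once it succeeds, the Corefile should contain the block from the sed expression, which can be inspected with:

	kubectl --context addons-959667 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	#         hosts {
	#            192.168.39.1 host.minikube.internal
	#            fallthrough
	#         }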
	I1219 02:26:27.064417    9864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.465446646s)
	I1219 02:26:27.065233    9864 node_ready.go:35] waiting up to 6m0s for node "addons-959667" to be "Ready" ...
	I1219 02:26:27.073906    9864 node_ready.go:49] node "addons-959667" is "Ready"
	I1219 02:26:27.073932    9864 node_ready.go:38] duration metric: took 8.655842ms for node "addons-959667" to be "Ready" ...
	I1219 02:26:27.073946    9864 api_server.go:52] waiting for apiserver process to appear ...
	I1219 02:26:27.073992    9864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 02:26:27.215624    9864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1219 02:26:27.415896    9864 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1219 02:26:27.415925    9864 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1219 02:26:27.580767    9864 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-959667" context rescaled to 1 replicas
	I1219 02:26:27.876103    9864 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1219 02:26:27.876134    9864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1219 02:26:28.117908    9864 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1219 02:26:28.117936    9864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1219 02:26:28.266259    9864 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1219 02:26:28.266287    9864 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1219 02:26:28.601191    9864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1219 02:26:29.385272    9864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.710233516s)
	I1219 02:26:29.385342    9864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.671328488s)
	I1219 02:26:29.385363    9864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.629039421s)
	I1219 02:26:29.385436    9864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.628712308s)
	I1219 02:26:29.694113    9864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.808886232s)
	I1219 02:26:30.539557    9864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (5.584736333s)
	I1219 02:26:30.539680    9864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (5.579754852s)
	I1219 02:26:30.539769    9864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.552339138s)
	I1219 02:26:31.171409    9864 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1219 02:26:31.174626    9864 main.go:144] libmachine: domain addons-959667 has defined MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:31.175174    9864 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:93:8d:ce", ip: ""} in network mk-addons-959667: {Iface:virbr1 ExpiryTime:2025-12-19 03:25:56 +0000 UTC Type:0 Mac:52:54:00:93:8d:ce Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-959667 Clientid:01:52:54:00:93:8d:ce}
	I1219 02:26:31.175209    9864 main.go:144] libmachine: domain addons-959667 has defined IP address 192.168.39.204 and MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:31.175421    9864 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/addons-959667/id_rsa Username:docker}
	I1219 02:26:31.655478    9864 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1219 02:26:31.791536    9864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.036219693s)
	I1219 02:26:31.791583    9864 addons.go:500] Verifying addon ingress=true in "addons-959667"
	I1219 02:26:31.791654    9864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.851161683s)
	I1219 02:26:31.791700    9864 addons.go:500] Verifying addon registry=true in "addons-959667"
	I1219 02:26:31.791756    9864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.621255665s)
	I1219 02:26:31.791784    9864 addons.go:500] Verifying addon metrics-server=true in "addons-959667"
	I1219 02:26:31.791792    9864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.397100982s)
	I1219 02:26:31.791847    9864 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.717835444s)
	I1219 02:26:31.791929    9864 api_server.go:72] duration metric: took 8.043219018s to wait for apiserver process to appear ...
	I1219 02:26:31.791940    9864 api_server.go:88] waiting for apiserver healthz status ...
	I1219 02:26:31.791966    9864 api_server.go:253] Checking apiserver healthz at https://192.168.39.204:8443/healthz ...
	I1219 02:26:31.793782    9864 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-959667 service yakd-dashboard -n yakd-dashboard
	
	I1219 02:26:31.794793    9864 out.go:179] * Verifying ingress addon...
	I1219 02:26:31.794798    9864 out.go:179] * Verifying registry addon...
	I1219 02:26:31.797182    9864 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1219 02:26:31.797341    9864 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1219 02:26:31.819321    9864 api_server.go:279] https://192.168.39.204:8443/healthz returned 200:
	ok
	I1219 02:26:31.820890    9864 api_server.go:141] control plane version: v1.34.3
	I1219 02:26:31.820916    9864 api_server.go:131] duration metric: took 28.969164ms to wait for apiserver health ...
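The healthz probe above can be reproduced from the host against the same endpoint; -k is needed because the apiserver serves a cluster-internal certificate:

	curl -k https://192.168.39.204:8443/healthz
	# ok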
	I1219 02:26:31.820927    9864 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 02:26:31.821281    9864 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1219 02:26:31.821298    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:31.821615    9864 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1219 02:26:31.821644    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
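Both kapi.go waiters above poll pods by label selector until they leave Pending. The same state can be checked directly with the labels and namespaces from the log, for example:

	kubectl --context addons-959667 -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx
	kubectl --context addons-959667 -n kube-system get pods -l kubernetes.io/minikube-addons=registry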
	I1219 02:26:31.830291    9864 system_pods.go:59] 15 kube-system pods found
	I1219 02:26:31.830322    9864 system_pods.go:61] "amd-gpu-device-plugin-ndblc" [52e98e31-befb-48ad-b245-22c725d997a8] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1219 02:26:31.830330    9864 system_pods.go:61] "coredns-66bc5c9577-7mvw5" [955a34ca-2e9f-4581-b200-58587c45d418] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 02:26:31.830337    9864 system_pods.go:61] "coredns-66bc5c9577-dgzk7" [ca3c94df-203f-4132-ad6e-6bdfc1b48407] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 02:26:31.830341    9864 system_pods.go:61] "etcd-addons-959667" [af742e4d-5870-4815-884c-9814ca9d40b4] Running
	I1219 02:26:31.830346    9864 system_pods.go:61] "kube-apiserver-addons-959667" [26855f89-72c2-46c9-ad41-5bae784441d7] Running
	I1219 02:26:31.830352    9864 system_pods.go:61] "kube-controller-manager-addons-959667" [e31806ed-5d2b-47b5-ba42-a477c770d4a4] Running
	I1219 02:26:31.830362    9864 system_pods.go:61] "kube-ingress-dns-minikube" [9ff76ffc-ec67-4423-9e5f-247c6c467e65] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1219 02:26:31.830366    9864 system_pods.go:61] "kube-proxy-rn72z" [0643a592-a08b-4feb-a8af-ff6845f08be6] Running
	I1219 02:26:31.830370    9864 system_pods.go:61] "kube-scheduler-addons-959667" [a50468f3-60e5-4b62-8ec9-37ac7a8c0f40] Running
	I1219 02:26:31.830375    9864 system_pods.go:61] "metrics-server-85b7d694d7-kdr4c" [b5838bd6-a786-4099-a4ee-b68d665097a8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 02:26:31.830380    9864 system_pods.go:61] "nvidia-device-plugin-daemonset-rzr4s" [fa691a5a-f568-49f5-b511-dddccd273edc] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1219 02:26:31.830386    9864 system_pods.go:61] "registry-6b586f9694-n9mgj" [894e49b5-4c73-41a0-8355-e53c7d367f9b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1219 02:26:31.830391    9864 system_pods.go:61] "registry-creds-764b6fb674-zgmpc" [9ee1d565-a848-454f-bec6-ac039f0217fd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1219 02:26:31.830395    9864 system_pods.go:61] "registry-proxy-zdbp9" [9ed57fbb-7a19-4bdf-8b88-ef375ffb880b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1219 02:26:31.830400    9864 system_pods.go:61] "storage-provisioner" [72e1d437-75f0-405a-9f77-e0fccbf8ac17] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 02:26:31.830407    9864 system_pods.go:74] duration metric: took 9.473901ms to wait for pod list to return data ...
	I1219 02:26:31.830417    9864 default_sa.go:34] waiting for default service account to be created ...
	I1219 02:26:31.846976    9864 default_sa.go:45] found service account: "default"
	I1219 02:26:31.847001    9864 default_sa.go:55] duration metric: took 16.575138ms for default service account to be created ...
	I1219 02:26:31.847010    9864 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 02:26:31.855092    9864 system_pods.go:86] 15 kube-system pods found
	I1219 02:26:31.855120    9864 system_pods.go:89] "amd-gpu-device-plugin-ndblc" [52e98e31-befb-48ad-b245-22c725d997a8] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1219 02:26:31.855129    9864 system_pods.go:89] "coredns-66bc5c9577-7mvw5" [955a34ca-2e9f-4581-b200-58587c45d418] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 02:26:31.855136    9864 system_pods.go:89] "coredns-66bc5c9577-dgzk7" [ca3c94df-203f-4132-ad6e-6bdfc1b48407] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 02:26:31.855142    9864 system_pods.go:89] "etcd-addons-959667" [af742e4d-5870-4815-884c-9814ca9d40b4] Running
	I1219 02:26:31.855147    9864 system_pods.go:89] "kube-apiserver-addons-959667" [26855f89-72c2-46c9-ad41-5bae784441d7] Running
	I1219 02:26:31.855152    9864 system_pods.go:89] "kube-controller-manager-addons-959667" [e31806ed-5d2b-47b5-ba42-a477c770d4a4] Running
	I1219 02:26:31.855157    9864 system_pods.go:89] "kube-ingress-dns-minikube" [9ff76ffc-ec67-4423-9e5f-247c6c467e65] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1219 02:26:31.855161    9864 system_pods.go:89] "kube-proxy-rn72z" [0643a592-a08b-4feb-a8af-ff6845f08be6] Running
	I1219 02:26:31.855165    9864 system_pods.go:89] "kube-scheduler-addons-959667" [a50468f3-60e5-4b62-8ec9-37ac7a8c0f40] Running
	I1219 02:26:31.855170    9864 system_pods.go:89] "metrics-server-85b7d694d7-kdr4c" [b5838bd6-a786-4099-a4ee-b68d665097a8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 02:26:31.855175    9864 system_pods.go:89] "nvidia-device-plugin-daemonset-rzr4s" [fa691a5a-f568-49f5-b511-dddccd273edc] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1219 02:26:31.855180    9864 system_pods.go:89] "registry-6b586f9694-n9mgj" [894e49b5-4c73-41a0-8355-e53c7d367f9b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1219 02:26:31.855185    9864 system_pods.go:89] "registry-creds-764b6fb674-zgmpc" [9ee1d565-a848-454f-bec6-ac039f0217fd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1219 02:26:31.855189    9864 system_pods.go:89] "registry-proxy-zdbp9" [9ed57fbb-7a19-4bdf-8b88-ef375ffb880b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1219 02:26:31.855197    9864 system_pods.go:89] "storage-provisioner" [72e1d437-75f0-405a-9f77-e0fccbf8ac17] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 02:26:31.855203    9864 system_pods.go:126] duration metric: took 8.188392ms to wait for k8s-apps to be running ...
	I1219 02:26:31.855213    9864 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 02:26:31.855256    9864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 02:26:31.877099    9864 addons.go:239] Setting addon gcp-auth=true in "addons-959667"
	I1219 02:26:31.877149    9864 host.go:66] Checking if "addons-959667" exists ...
	I1219 02:26:31.879185    9864 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1219 02:26:31.881928    9864 main.go:144] libmachine: domain addons-959667 has defined MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:31.882370    9864 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:93:8d:ce", ip: ""} in network mk-addons-959667: {Iface:virbr1 ExpiryTime:2025-12-19 03:25:56 +0000 UTC Type:0 Mac:52:54:00:93:8d:ce Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-959667 Clientid:01:52:54:00:93:8d:ce}
	I1219 02:26:31.882392    9864 main.go:144] libmachine: domain addons-959667 has defined IP address 192.168.39.204 and MAC address 52:54:00:93:8d:ce in network mk-addons-959667
	I1219 02:26:31.882592    9864 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/addons-959667/id_rsa Username:docker}
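	The two lines above show minikube resolving the VM address from the libvirt DHCP lease for MAC 52:54:00:93:8d:ce and then opening a key-based SSH client to 192.168.39.204:22. A minimal sketch of that kind of connection using golang.org/x/crypto/ssh is below; the host, user, key path and remote command are copied from the log for illustration only, and the helper name is made up — this is not minikube's sshutil implementation.

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// dialWithKey opens an SSH connection the way the sshutil log line suggests:
	// public-key auth against the VM's sshd on port 22. Illustrative sketch only.
	func dialWithKey(host, user, keyPath string) (*ssh.Client, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return nil, err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return nil, err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
		}
		return ssh.Dial("tcp", host+":22", cfg)
	}

	func main() {
		client, err := dialWithKey("192.168.39.204", "docker",
			"/home/jenkins/minikube-integration/22230-5010/.minikube/machines/addons-959667/id_rsa")
		if err != nil {
			panic(err)
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer session.Close()
		out, _ := session.CombinedOutput("cat /var/lib/minikube/google_application_credentials.json")
		fmt.Println(string(out))
	}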
	I1219 02:26:32.318086    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:32.319619    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:32.517896    9864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.302223602s)
	W1219 02:26:32.517939    9864 addons.go:479] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1219 02:26:32.517979    9864 retry.go:31] will retry after 303.364724ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
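	The stderr above is the usual CRD-ordering problem: the csi-hostpath-snapclass VolumeSnapshotClass is applied in the same kubectl invocation as the CRDs that define it, so the first apply fails with "ensure CRDs are installed first" until the new API mapping is registered, and minikube simply retries (the re-apply with --force follows a few lines below). A minimal sketch of that retry-until-the-mapping-resolves pattern, assuming kubectl is on PATH; this is not minikube's actual retry.go, and the paths are taken from the log purely for illustration.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// applyWithRetry re-runs "kubectl apply" until it succeeds or attempts run out,
	// sleeping with a crude exponential backoff between tries.
	func applyWithRetry(kubeconfig, manifest string, attempts int) error {
		backoff := 300 * time.Millisecond
		var lastErr error
		for i := 0; i < attempts; i++ {
			cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "apply", "-f", manifest)
			out, err := cmd.CombinedOutput()
			if err == nil {
				return nil
			}
			lastErr = fmt.Errorf("attempt %d: %v\n%s", i+1, err, out)
			time.Sleep(backoff)
			backoff *= 2
		}
		return lastErr
	}

	func main() {
		err := applyWithRetry("/var/lib/minikube/kubeconfig",
			"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml", 5)
		if err != nil {
			fmt.Println("apply failed after retries:", err)
		}
	}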
	I1219 02:26:32.817945    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:32.818106    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:32.822191    9864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1219 02:26:33.345368    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:33.346006    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:33.367140    9864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.765905618s)
	I1219 02:26:33.367177    9864 addons.go:500] Verifying addon csi-hostpath-driver=true in "addons-959667"
	I1219 02:26:33.367185    9864 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.511904892s)
	I1219 02:26:33.367214    9864 system_svc.go:56] duration metric: took 1.511996381s WaitForService to wait for kubelet
	I1219 02:26:33.367224    9864 kubeadm.go:587] duration metric: took 9.618519258s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 02:26:33.367246    9864 node_conditions.go:102] verifying NodePressure condition ...
	I1219 02:26:33.367247    9864 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.488037709s)
	I1219 02:26:33.369107    9864 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1219 02:26:33.369176    9864 out.go:179] * Verifying csi-hostpath-driver addon...
	I1219 02:26:33.370699    9864 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1219 02:26:33.371181    9864 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1219 02:26:33.371960    9864 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1219 02:26:33.371980    9864 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1219 02:26:33.407586    9864 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1219 02:26:33.407619    9864 node_conditions.go:123] node cpu capacity is 2
	I1219 02:26:33.407638    9864 node_conditions.go:105] duration metric: took 40.377536ms to run NodePressure ...
	I1219 02:26:33.407652    9864 start.go:242] waiting for startup goroutines ...
	I1219 02:26:33.414240    9864 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1219 02:26:33.414262    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
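	From this point the log is dominated by kapi.go:96 poll lines: for each addon, minikube lists the pods matching a label selector (here kubernetes.io/minikube-addons=csi-hostpath-driver in kube-system) and re-checks until every matching pod reports Running, which is why the same "current state: Pending" message repeats every few hundred milliseconds for minutes. A minimal client-go sketch of that polling loop follows; the kubeconfig path and selector are taken from the log, and the function names are illustrative rather than minikube's.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForLabel re-lists pods matching selector in ns until all are Running
	// or the timeout expires, logging the current phase on each pass.
	func waitForLabel(ctx context.Context, cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 {
				ready := true
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						ready = false
						fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					}
				}
				if ready {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("pods with selector %q not Running after %s", selector, timeout)
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // illustrative path
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(config)
		err = waitForLabel(context.Background(), cs, "kube-system",
			"kubernetes.io/minikube-addons=csi-hostpath-driver", 6*time.Minute)
		if err != nil {
			fmt.Println(err)
		}
	}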
	I1219 02:26:33.525145    9864 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1219 02:26:33.525174    9864 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1219 02:26:33.664515    9864 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1219 02:26:33.664542    9864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1219 02:26:33.721894    9864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1219 02:26:33.805112    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:33.805451    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:33.905432    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:34.304581    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:34.305596    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:34.377648    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:34.818358    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:34.828093    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:34.901900    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:35.304779    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:35.306681    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:35.406538    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:35.822921    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:35.823627    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:35.924893    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:36.304483    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:36.304994    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:36.335623    9864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.513391713s)
	I1219 02:26:36.335668    9864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.61373692s)
	I1219 02:26:36.336691    9864 addons.go:500] Verifying addon gcp-auth=true in "addons-959667"
	I1219 02:26:36.340468    9864 out.go:179] * Verifying gcp-auth addon...
	I1219 02:26:36.342184    9864 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1219 02:26:36.407261    9864 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1219 02:26:36.407284    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:36.407465    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:36.801557    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:36.802675    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:36.845335    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:36.874758    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:37.301864    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:37.302166    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:37.346092    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:37.375292    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:37.801298    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:37.801484    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:37.845252    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:37.874965    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:38.301973    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:38.304078    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:38.346556    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:38.375495    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:38.804634    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:38.804660    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:38.846361    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:38.877286    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:39.304352    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:39.304877    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:39.345418    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:39.403457    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:39.802419    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:39.805731    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:39.845870    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:39.877713    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:40.302798    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:40.304218    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:40.346173    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:40.375064    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:40.800506    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:40.801698    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:40.846954    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:40.877384    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:41.302720    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:41.303264    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:41.346695    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:41.376516    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:41.801249    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:41.802899    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:41.845556    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:41.874348    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:42.301101    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:42.301124    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:42.345716    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:42.374566    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:42.802280    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:42.802653    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:42.845434    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:42.874936    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:43.300985    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:43.301369    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:43.345211    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:43.375103    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:43.803481    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:43.803642    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:43.847600    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:43.876385    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:44.303015    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:44.303099    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:44.347362    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:44.377117    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:44.803361    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:44.803522    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:44.845994    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:44.876467    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:45.304208    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:45.304641    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:45.347178    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:45.383221    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:45.803962    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:45.806280    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:45.844797    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:45.875081    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:46.302928    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:46.303176    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:46.346621    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:46.377436    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:46.801263    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:46.801365    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:46.844649    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:46.875529    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:47.446927    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:47.447139    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:47.448964    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:47.449214    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:47.801769    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:47.801926    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:47.845352    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:47.877366    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:48.301184    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:48.302284    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:48.346341    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:48.374721    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:48.802234    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:48.802506    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:48.847632    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:48.879630    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:49.350326    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:49.350789    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:49.351970    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:49.375610    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:49.804920    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:49.805903    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:49.846941    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:49.876488    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:50.306304    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:50.307749    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:50.348879    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:50.376196    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:50.802454    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:50.802478    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:50.845778    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:50.875385    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:51.302710    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:51.303957    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:51.346643    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:51.377495    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:51.802244    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:51.802378    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:51.845380    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:51.874672    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:52.300730    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:52.302565    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:52.345128    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:52.375070    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:52.801704    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:52.801879    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:52.847165    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:52.878819    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:53.305879    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:53.306268    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:53.345566    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:53.374371    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:53.804827    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:53.805087    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:53.845437    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:53.874294    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:54.301349    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:54.302516    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:54.347285    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:54.375654    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:54.804418    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:54.805949    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:54.848865    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:54.878209    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:55.301566    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:55.303991    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:55.345971    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:55.375235    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:55.810754    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:55.810795    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:55.846685    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:55.875471    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:56.301958    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:56.302906    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:56.346170    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:56.375745    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:56.804496    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:56.804654    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:56.845710    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:56.874144    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:57.301243    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:57.301291    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:57.345757    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:57.374141    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:57.801249    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:57.801552    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:57.844742    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:57.874803    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:58.301874    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:58.301876    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:58.346656    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:58.374826    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:58.802483    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:58.802512    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:58.845600    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:58.874229    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:59.303756    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:59.305138    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:59.348083    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:59.377178    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:59.802875    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:59.803755    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:59.846043    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:59.876537    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:00.301147    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:00.301699    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:27:00.345638    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:00.375031    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:00.801010    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:00.801199    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:27:00.846252    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:00.874107    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:01.334685    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:27:01.335419    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:01.344710    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:01.375064    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:01.801595    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:01.801974    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:27:01.845524    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:01.874935    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:02.305435    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:27:02.305523    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:02.347244    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:02.374166    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:02.802209    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:02.803374    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:27:02.846162    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:02.876115    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:03.301383    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:27:03.302690    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:03.345463    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:03.374520    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:03.803937    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:27:03.803979    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:03.845920    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:03.875664    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:04.301277    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:04.302017    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:27:04.346358    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:04.374975    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:04.801430    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:27:04.801608    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:04.846083    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:04.874936    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:05.302321    9864 kapi.go:107] duration metric: took 33.504975334s to wait for kubernetes.io/minikube-addons=registry ...
	I1219 02:27:05.303451    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:05.344846    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:05.374457    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:05.801244    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:05.845190    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:05.873428    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:06.301438    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:06.345452    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:06.374340    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:06.802646    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:06.905662    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:06.906815    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:07.301433    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:07.346480    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:07.376130    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:07.838248    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:07.848217    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:07.876344    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:08.301364    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:08.345087    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:08.375204    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:08.801718    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:08.852986    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:08.877189    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:09.300302    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:09.345958    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:09.374930    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:09.801534    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:09.846296    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:09.902566    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:10.301194    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:10.345867    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:10.378149    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:10.801297    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:10.847944    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:10.875849    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:11.301631    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:11.348253    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:11.374742    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:11.801540    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:11.845610    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:11.877794    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:12.304966    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:12.409304    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:12.409885    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:12.801298    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:12.845441    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:12.874589    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:13.308765    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:13.404187    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:13.405043    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:13.800579    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:13.845873    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:13.875129    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:14.302779    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:14.346390    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:14.375660    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:14.802458    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:14.844715    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:14.873970    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:15.304399    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:15.345188    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:15.374761    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:15.802587    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:15.845934    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:15.874852    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:16.305906    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:16.350345    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:16.376692    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:16.803921    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:16.848034    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:16.877307    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:17.304223    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:17.350053    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:17.376444    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:17.801921    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:17.847622    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:17.875289    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:18.304086    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:18.346100    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:18.376083    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:18.800341    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:18.844878    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:18.875049    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:19.300606    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:19.347299    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:19.374531    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:19.802065    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:19.845415    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:19.874322    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:20.303193    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:20.347106    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:20.377612    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:20.812544    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:20.845115    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:20.875078    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:21.301997    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:21.351068    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:21.377030    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:21.800715    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:21.845088    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:21.874724    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:22.303649    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:22.347540    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:22.375307    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:22.810149    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:22.846605    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:22.875905    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:23.301225    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:23.346193    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:23.374699    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:23.801499    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:23.848208    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:23.875846    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:24.305182    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:24.346042    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:24.405543    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:24.800536    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:24.845046    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:24.874644    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:25.300942    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:25.347296    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:25.375500    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:25.809080    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:25.847978    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:25.877959    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:26.302351    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:26.405744    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:26.405916    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:26.802535    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:26.846791    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:26.875748    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:27.305522    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:27.344832    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:27.374302    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:27.807953    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:27.851531    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:27.875126    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:28.302202    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:28.346884    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:28.374534    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:28.810615    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:28.847307    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:28.876553    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:29.301102    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:29.401722    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:29.401758    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:29.802962    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:29.853222    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:29.905286    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:30.302203    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:30.348774    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:30.375760    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:30.801984    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:30.847174    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:30.875519    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:31.327171    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:31.346063    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:31.377251    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:31.804156    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:31.905194    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:31.905384    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:32.300781    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:32.347461    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:32.378435    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:32.801659    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:32.845826    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:32.876366    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:33.300944    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:33.344900    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:33.374601    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:33.801663    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:33.846903    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:33.877183    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:34.302048    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:34.345486    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:34.375044    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:34.888671    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:34.888678    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:34.888981    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:35.300807    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:35.346730    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:35.374323    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:27:35.800692    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:35.845098    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:35.875006    9864 kapi.go:107] duration metric: took 1m2.503822478s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1219 02:27:36.305260    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:36.347822    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:36.801026    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:36.845978    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:37.301757    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:37.347326    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:37.808517    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:37.846650    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:38.324768    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:38.348845    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:38.804109    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:38.847760    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:39.301392    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:39.344687    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:39.801766    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:39.909041    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:40.302282    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:40.347232    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:40.802898    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:40.846130    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:41.301622    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:41.345481    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:41.802151    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:41.845846    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:42.301841    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:42.346234    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:42.801169    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:42.845980    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:43.300937    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:43.345566    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:43.801374    9864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:27:43.845107    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:44.300702    9864 kapi.go:107] duration metric: took 1m12.503517645s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1219 02:27:44.345148    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:44.846419    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:45.347621    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:45.849941    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:46.346209    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:46.846327    9864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:27:47.345817    9864 kapi.go:107] duration metric: took 1m11.00362963s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1219 02:27:47.347289    9864 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-959667 cluster.
	I1219 02:27:47.348524    9864 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1219 02:27:47.349610    9864 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1219 02:27:47.350738    9864 out.go:179] * Enabled addons: amd-gpu-device-plugin, storage-provisioner, cloud-spanner, ingress-dns, default-storageclass, storage-provisioner-rancher, inspektor-gadget, registry-creds, nvidia-device-plugin, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1219 02:27:47.351757    9864 addons.go:546] duration metric: took 1m23.603020572s for enable addons: enabled=[amd-gpu-device-plugin storage-provisioner cloud-spanner ingress-dns default-storageclass storage-provisioner-rancher inspektor-gadget registry-creds nvidia-device-plugin metrics-server yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1219 02:27:47.351790    9864 start.go:247] waiting for cluster config update ...
	I1219 02:27:47.351807    9864 start.go:256] writing updated cluster config ...
	I1219 02:27:47.352050    9864 ssh_runner.go:195] Run: rm -f paused
	I1219 02:27:47.358973    9864 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 02:27:47.362005    9864 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7mvw5" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 02:27:47.366495    9864 pod_ready.go:94] pod "coredns-66bc5c9577-7mvw5" is "Ready"
	I1219 02:27:47.366559    9864 pod_ready.go:86] duration metric: took 4.494054ms for pod "coredns-66bc5c9577-7mvw5" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 02:27:47.368613    9864 pod_ready.go:83] waiting for pod "etcd-addons-959667" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 02:27:47.373418    9864 pod_ready.go:94] pod "etcd-addons-959667" is "Ready"
	I1219 02:27:47.373442    9864 pod_ready.go:86] duration metric: took 4.808809ms for pod "etcd-addons-959667" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 02:27:47.375653    9864 pod_ready.go:83] waiting for pod "kube-apiserver-addons-959667" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 02:27:47.380432    9864 pod_ready.go:94] pod "kube-apiserver-addons-959667" is "Ready"
	I1219 02:27:47.380452    9864 pod_ready.go:86] duration metric: took 4.777274ms for pod "kube-apiserver-addons-959667" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 02:27:47.382954    9864 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-959667" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 02:27:47.763199    9864 pod_ready.go:94] pod "kube-controller-manager-addons-959667" is "Ready"
	I1219 02:27:47.763223    9864 pod_ready.go:86] duration metric: took 380.249431ms for pod "kube-controller-manager-addons-959667" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 02:27:47.964295    9864 pod_ready.go:83] waiting for pod "kube-proxy-rn72z" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 02:27:48.363520    9864 pod_ready.go:94] pod "kube-proxy-rn72z" is "Ready"
	I1219 02:27:48.363542    9864 pod_ready.go:86] duration metric: took 399.224611ms for pod "kube-proxy-rn72z" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 02:27:48.562906    9864 pod_ready.go:83] waiting for pod "kube-scheduler-addons-959667" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 02:27:48.963393    9864 pod_ready.go:94] pod "kube-scheduler-addons-959667" is "Ready"
	I1219 02:27:48.963418    9864 pod_ready.go:86] duration metric: took 400.48898ms for pod "kube-scheduler-addons-959667" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 02:27:48.963428    9864 pod_ready.go:40] duration metric: took 1.604428188s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 02:27:49.004294    9864 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 02:27:49.005779    9864 out.go:179] * Done! kubectl is now configured to use "addons-959667" cluster and "default" namespace by default
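The gcp-auth messages above state that GCP credentials are mounted into every pod unless the pod configuration carries a label with the `gcp-auth-skip-secret` key. A minimal sketch of such a pod object, built with the Kubernetes Go API types and printed as a manifest, is shown below; the pod name, label value "true", and use of the nginx image are illustrative assumptions, only the label key itself comes from the output above.

    package main

    import (
    	"encoding/json"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	// Hypothetical pod that opts out of gcp-auth credential mounting via the
    	// gcp-auth-skip-secret label key mentioned in the minikube output above.
    	pod := corev1.Pod{
    		TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
    		ObjectMeta: metav1.ObjectMeta{
    			Name: "no-gcp-creds", // illustrative name
    			// The "true" value is illustrative; the message above only asks for the label key.
    			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
    		},
    		Spec: corev1.PodSpec{
    			Containers: []corev1.Container{
    				{Name: "nginx", Image: "public.ecr.aws/nginx/nginx"}, // image name taken from this report's container list
    			},
    		},
    	}

    	// Print the object as JSON; the result could be applied with `kubectl apply -f -`.
    	out, err := json.MarshalIndent(&pod, "", "  ")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(string(out))
    }
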
	
	
	==> CRI-O <==
	Dec 19 02:30:46 addons-959667 crio[817]: time="2025-12-19 02:30:46.355653433Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d52fc976-daad-4ece-9c74-235be99c6768 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 02:30:46 addons-959667 crio[817]: time="2025-12-19 02:30:46.355772713Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d52fc976-daad-4ece-9c74-235be99c6768 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 02:30:46 addons-959667 crio[817]: time="2025-12-19 02:30:46.356620516Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c0dd7f98f906b69d21a969d55cae8066f274005365a6a20a7d209f5b6270a374,PodSandboxId:52cdd625639437b6b104f10bacae393895639521535fca05efb621111d8b5a6b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:00e053577693e0ee5f7f8b433cdb249624af188622d0da5df20eef4e25a0881c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,State:CONTAINER_RUNNING,CreatedAt:1766111305612629977,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: adfec46a-88fb-45a3-a47c-4b9e6b5a439b,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6377ab8218b4ff42549d2a9b144892ec61558de7eb2c2dfb8e896ebbdcfda587,PodSandboxId:0da791688e6ede865e08a846100d2175d1f1eabe642f7a534e138af1df583d8a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1766111273124897440,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1db5ca4f-1d15-4ebb-b546-a808b3122492,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a7e2ab179be66a75cd88db3192813ab417a61ef5761873a7622b1a48b4ea052,PodSandboxId:7cbd157787c6a4b86ebe20147812e3d2b5a04e252fa206f4ad36b832686bc038,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1766111263064720605,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-kjr2x,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1e7819f7-430d-4fe2-ab7b-e359c036f6e5,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:640fc0aaec1755429adad4dfe316a441764f654ca89b653f5d7eded98457871c,PodSandboxId:1337bec495a7486b6d37ef2735dd1b4487ba472909f2c3e8c72d17e68fde0e05,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1766111246262084106,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-n2kc5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 50f4eb7e-b566-49e7-a548-7d28e233e965,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed94298812e659f7c6301c14755231e621b2e73a22ce3fe0b8f733925aeef294,PodSandboxId:0158ac50021e4a0d7a0a148857a35e6289e45c0265480e5001fc68b437949b84,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1766111243087019823,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-nwfnb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5641aab4-201e-45fc-b1e4-e41ba721e3a2,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb4f21ce47ee59d2d5b22e50d3cda6e090990fb5ae1486c1253b12878d960b1b,PodSandboxId:bbb85407ed64e50b3bf98e3052114f1c6fd32c03cce81802dfe2727f90c86271,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1766111210681727034,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ff76ffc-ec67-4423-9e5f-247c6c467e65,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2af29dec50706a06b92b92cbced4875c18c8d6624ad9d851221a0ddd1ac07dce,PodSandboxId:bb2b9773b463ed0a5cb7fd08151885939d94055f6ae341c593fd1f70a93bb56a,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&I
mageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1766111194256262113,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-ndblc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52e98e31-befb-48ad-b245-22c725d997a8,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3078f62ed2e8a4aa4506caac3b02f7379c293aa8b2317a1fcfd0239787160f5,PodSandboxId:3143c424efb4bcb2d1dbca10b3b1aa3e3a0c7d2c9f81597c93aa4b056f07a99d,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766111193990064944,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72e1d437-75f0-405a-9f77-e0fccbf8ac17,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f352adcb57b4458acd68079e6c8959a74042df77f67c5a0839c3fa05161945d0,PodSandboxId:a684d6923f108154775ce3dc2fb99550dabf7ee6bf813f3750fe93451a61d2eb,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1766111185328028548,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7mvw5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 955a34ca-2e9f-4581-b200-58587c45d418,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ac645f3b8b58ae3c8cba1ddd8da39850636a7dca2a7cef3962a9377c8b5ad3,PodSandboxId:f8bd7dd16002e5314c8c73df5f311bd74b2f1310f61e8a74807cc0ebd7035f24,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1766111184355591713,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rn72z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0643a592-a08b-4feb-a8af-ff6845f08be6,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b172f2f068e2fdab57c063efca8ba9d6275d4dd64e8782bcb0805f53a16fb565,PodSandboxId:cefefe3216ba29ec3c52a94b759a0b6bb820a86172d153098370d4e673f2dc49,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766111173617348313,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-959667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24cd760bc9389ce67b4bb7e6badfc433,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":102
57,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89789a833497c8fb35ec731ea181a206226766bbe96c86c9017127244ba52185,PodSandboxId:ea7fffb4b2ad6fe7bad5ae523e5cd34729aed21a0395fe40fea17f3b95025059,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1766111173214001618,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-959667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05ea32b24a22095e73231267d305e225,},Annotations:map[string]string{io.kubernetes.container.hash:
79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e60a506fa525e9f46d81a6ceceabffc06fa664db93cd84d681595342ed55075c,PodSandboxId:d2a8fcbb2ec516e5b44e0b4ce92aab97f17a79a1f1bd312c0265520dbb27483f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766111172970593494,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-959667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94c74054703d00a5b8a9c
b634a4aae9b,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13cbb07eef887036ab079b72dc10d3adbafd13d77f46c0f7b2998ed042ceea05,PodSandboxId:d7c9599a0689ae3e2fc804d9670e1bf6ef045403a60186b77428f54fc14e5c3e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766111172814512787,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sched
uler-addons-959667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 760fdf93d445d5888cc45272bb92887c,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d52fc976-daad-4ece-9c74-235be99c6768 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 02:30:46 addons-959667 crio[817]: time="2025-12-19 02:30:46.360339778Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/1.0" file="docker/docker_client.go:631"
	Dec 19 02:30:46 addons-959667 crio[817]: time="2025-12-19 02:30:46.387515155Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dac056c6-acd4-4165-a0e4-bdfe32a33919 name=/runtime.v1.RuntimeService/Version
	Dec 19 02:30:46 addons-959667 crio[817]: time="2025-12-19 02:30:46.387584629Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dac056c6-acd4-4165-a0e4-bdfe32a33919 name=/runtime.v1.RuntimeService/Version
	Dec 19 02:30:46 addons-959667 crio[817]: time="2025-12-19 02:30:46.389075818Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3c5065f0-d172-4f42-81c4-29fbfa3246d2 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 02:30:46 addons-959667 crio[817]: time="2025-12-19 02:30:46.390237891Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766111446390215487,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:551110,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3c5065f0-d172-4f42-81c4-29fbfa3246d2 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 02:30:46 addons-959667 crio[817]: time="2025-12-19 02:30:46.391384845Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4cfcacde-9d3f-4dea-a1c0-0685e838b3ed name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 02:30:46 addons-959667 crio[817]: time="2025-12-19 02:30:46.391580873Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4cfcacde-9d3f-4dea-a1c0-0685e838b3ed name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 02:30:46 addons-959667 crio[817]: time="2025-12-19 02:30:46.392196028Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c0dd7f98f906b69d21a969d55cae8066f274005365a6a20a7d209f5b6270a374,PodSandboxId:52cdd625639437b6b104f10bacae393895639521535fca05efb621111d8b5a6b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:00e053577693e0ee5f7f8b433cdb249624af188622d0da5df20eef4e25a0881c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,State:CONTAINER_RUNNING,CreatedAt:1766111305612629977,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: adfec46a-88fb-45a3-a47c-4b9e6b5a439b,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6377ab8218b4ff42549d2a9b144892ec61558de7eb2c2dfb8e896ebbdcfda587,PodSandboxId:0da791688e6ede865e08a846100d2175d1f1eabe642f7a534e138af1df583d8a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1766111273124897440,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1db5ca4f-1d15-4ebb-b546-a808b3122492,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a7e2ab179be66a75cd88db3192813ab417a61ef5761873a7622b1a48b4ea052,PodSandboxId:7cbd157787c6a4b86ebe20147812e3d2b5a04e252fa206f4ad36b832686bc038,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1766111263064720605,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-kjr2x,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1e7819f7-430d-4fe2-ab7b-e359c036f6e5,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:640fc0aaec1755429adad4dfe316a441764f654ca89b653f5d7eded98457871c,PodSandboxId:1337bec495a7486b6d37ef2735dd1b4487ba472909f2c3e8c72d17e68fde0e05,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1766111246262084106,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-n2kc5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 50f4eb7e-b566-49e7-a548-7d28e233e965,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed94298812e659f7c6301c14755231e621b2e73a22ce3fe0b8f733925aeef294,PodSandboxId:0158ac50021e4a0d7a0a148857a35e6289e45c0265480e5001fc68b437949b84,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1766111243087019823,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-nwfnb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5641aab4-201e-45fc-b1e4-e41ba721e3a2,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb4f21ce47ee59d2d5b22e50d3cda6e090990fb5ae1486c1253b12878d960b1b,PodSandboxId:bbb85407ed64e50b3bf98e3052114f1c6fd32c03cce81802dfe2727f90c86271,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1766111210681727034,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ff76ffc-ec67-4423-9e5f-247c6c467e65,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2af29dec50706a06b92b92cbced4875c18c8d6624ad9d851221a0ddd1ac07dce,PodSandboxId:bb2b9773b463ed0a5cb7fd08151885939d94055f6ae341c593fd1f70a93bb56a,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&I
mageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1766111194256262113,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-ndblc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52e98e31-befb-48ad-b245-22c725d997a8,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3078f62ed2e8a4aa4506caac3b02f7379c293aa8b2317a1fcfd0239787160f5,PodSandboxId:3143c424efb4bcb2d1dbca10b3b1aa3e3a0c7d2c9f81597c93aa4b056f07a99d,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766111193990064944,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72e1d437-75f0-405a-9f77-e0fccbf8ac17,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f352adcb57b4458acd68079e6c8959a74042df77f67c5a0839c3fa05161945d0,PodSandboxId:a684d6923f108154775ce3dc2fb99550dabf7ee6bf813f3750fe93451a61d2eb,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1766111185328028548,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7mvw5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 955a34ca-2e9f-4581-b200-58587c45d418,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ac645f3b8b58ae3c8cba1ddd8da39850636a7dca2a7cef3962a9377c8b5ad3,PodSandboxId:f8bd7dd16002e5314c8c73df5f311bd74b2f1310f61e8a74807cc0ebd7035f24,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1766111184355591713,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rn72z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0643a592-a08b-4feb-a8af-ff6845f08be6,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b172f2f068e2fdab57c063efca8ba9d6275d4dd64e8782bcb0805f53a16fb565,PodSandboxId:cefefe3216ba29ec3c52a94b759a0b6bb820a86172d153098370d4e673f2dc49,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766111173617348313,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-959667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24cd760bc9389ce67b4bb7e6badfc433,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":102
57,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89789a833497c8fb35ec731ea181a206226766bbe96c86c9017127244ba52185,PodSandboxId:ea7fffb4b2ad6fe7bad5ae523e5cd34729aed21a0395fe40fea17f3b95025059,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1766111173214001618,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-959667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05ea32b24a22095e73231267d305e225,},Annotations:map[string]string{io.kubernetes.container.hash:
79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e60a506fa525e9f46d81a6ceceabffc06fa664db93cd84d681595342ed55075c,PodSandboxId:d2a8fcbb2ec516e5b44e0b4ce92aab97f17a79a1f1bd312c0265520dbb27483f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766111172970593494,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-959667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94c74054703d00a5b8a9c
b634a4aae9b,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13cbb07eef887036ab079b72dc10d3adbafd13d77f46c0f7b2998ed042ceea05,PodSandboxId:d7c9599a0689ae3e2fc804d9670e1bf6ef045403a60186b77428f54fc14e5c3e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766111172814512787,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sched
uler-addons-959667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 760fdf93d445d5888cc45272bb92887c,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4cfcacde-9d3f-4dea-a1c0-0685e838b3ed name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 02:30:46 addons-959667 crio[817]: time="2025-12-19 02:30:46.422915760Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0d871b8b-371e-4845-ae19-8b8fd06da93e name=/runtime.v1.RuntimeService/Version
	Dec 19 02:30:46 addons-959667 crio[817]: time="2025-12-19 02:30:46.423220888Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0d871b8b-371e-4845-ae19-8b8fd06da93e name=/runtime.v1.RuntimeService/Version
	Dec 19 02:30:46 addons-959667 crio[817]: time="2025-12-19 02:30:46.424793127Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3a7f15d3-8a4a-4b16-8163-de715a0fc29b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 02:30:46 addons-959667 crio[817]: time="2025-12-19 02:30:46.426197269Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766111446426173716,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:551110,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3a7f15d3-8a4a-4b16-8163-de715a0fc29b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 02:30:46 addons-959667 crio[817]: time="2025-12-19 02:30:46.427058245Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b06e94f0-3459-4421-9b36-c5bec8c1da52 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 02:30:46 addons-959667 crio[817]: time="2025-12-19 02:30:46.427122327Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b06e94f0-3459-4421-9b36-c5bec8c1da52 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 02:30:46 addons-959667 crio[817]: time="2025-12-19 02:30:46.427482944Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c0dd7f98f906b69d21a969d55cae8066f274005365a6a20a7d209f5b6270a374,PodSandboxId:52cdd625639437b6b104f10bacae393895639521535fca05efb621111d8b5a6b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:00e053577693e0ee5f7f8b433cdb249624af188622d0da5df20eef4e25a0881c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,State:CONTAINER_RUNNING,CreatedAt:1766111305612629977,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: adfec46a-88fb-45a3-a47c-4b9e6b5a439b,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6377ab8218b4ff42549d2a9b144892ec61558de7eb2c2dfb8e896ebbdcfda587,PodSandboxId:0da791688e6ede865e08a846100d2175d1f1eabe642f7a534e138af1df583d8a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1766111273124897440,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1db5ca4f-1d15-4ebb-b546-a808b3122492,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a7e2ab179be66a75cd88db3192813ab417a61ef5761873a7622b1a48b4ea052,PodSandboxId:7cbd157787c6a4b86ebe20147812e3d2b5a04e252fa206f4ad36b832686bc038,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1766111263064720605,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-kjr2x,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1e7819f7-430d-4fe2-ab7b-e359c036f6e5,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:640fc0aaec1755429adad4dfe316a441764f654ca89b653f5d7eded98457871c,PodSandboxId:1337bec495a7486b6d37ef2735dd1b4487ba472909f2c3e8c72d17e68fde0e05,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1766111246262084106,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-n2kc5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 50f4eb7e-b566-49e7-a548-7d28e233e965,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed94298812e659f7c6301c14755231e621b2e73a22ce3fe0b8f733925aeef294,PodSandboxId:0158ac50021e4a0d7a0a148857a35e6289e45c0265480e5001fc68b437949b84,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1766111243087019823,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-nwfnb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5641aab4-201e-45fc-b1e4-e41ba721e3a2,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb4f21ce47ee59d2d5b22e50d3cda6e090990fb5ae1486c1253b12878d960b1b,PodSandboxId:bbb85407ed64e50b3bf98e3052114f1c6fd32c03cce81802dfe2727f90c86271,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1766111210681727034,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ff76ffc-ec67-4423-9e5f-247c6c467e65,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2af29dec50706a06b92b92cbced4875c18c8d6624ad9d851221a0ddd1ac07dce,PodSandboxId:bb2b9773b463ed0a5cb7fd08151885939d94055f6ae341c593fd1f70a93bb56a,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&I
mageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1766111194256262113,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-ndblc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52e98e31-befb-48ad-b245-22c725d997a8,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3078f62ed2e8a4aa4506caac3b02f7379c293aa8b2317a1fcfd0239787160f5,PodSandboxId:3143c424efb4bcb2d1dbca10b3b1aa3e3a0c7d2c9f81597c93aa4b056f07a99d,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766111193990064944,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72e1d437-75f0-405a-9f77-e0fccbf8ac17,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f352adcb57b4458acd68079e6c8959a74042df77f67c5a0839c3fa05161945d0,PodSandboxId:a684d6923f108154775ce3dc2fb99550dabf7ee6bf813f3750fe93451a61d2eb,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1766111185328028548,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7mvw5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 955a34ca-2e9f-4581-b200-58587c45d418,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ac645f3b8b58ae3c8cba1ddd8da39850636a7dca2a7cef3962a9377c8b5ad3,PodSandboxId:f8bd7dd16002e5314c8c73df5f311bd74b2f1310f61e8a74807cc0ebd7035f24,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1766111184355591713,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rn72z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0643a592-a08b-4feb-a8af-ff6845f08be6,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b172f2f068e2fdab57c063efca8ba9d6275d4dd64e8782bcb0805f53a16fb565,PodSandboxId:cefefe3216ba29ec3c52a94b759a0b6bb820a86172d153098370d4e673f2dc49,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766111173617348313,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-959667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24cd760bc9389ce67b4bb7e6badfc433,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":102
57,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89789a833497c8fb35ec731ea181a206226766bbe96c86c9017127244ba52185,PodSandboxId:ea7fffb4b2ad6fe7bad5ae523e5cd34729aed21a0395fe40fea17f3b95025059,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1766111173214001618,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-959667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05ea32b24a22095e73231267d305e225,},Annotations:map[string]string{io.kubernetes.container.hash:
79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e60a506fa525e9f46d81a6ceceabffc06fa664db93cd84d681595342ed55075c,PodSandboxId:d2a8fcbb2ec516e5b44e0b4ce92aab97f17a79a1f1bd312c0265520dbb27483f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766111172970593494,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-959667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94c74054703d00a5b8a9c
b634a4aae9b,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13cbb07eef887036ab079b72dc10d3adbafd13d77f46c0f7b2998ed042ceea05,PodSandboxId:d7c9599a0689ae3e2fc804d9670e1bf6ef045403a60186b77428f54fc14e5c3e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766111172814512787,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sched
uler-addons-959667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 760fdf93d445d5888cc45272bb92887c,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b06e94f0-3459-4421-9b36-c5bec8c1da52 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 02:30:46 addons-959667 crio[817]: time="2025-12-19 02:30:46.455772487Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0053751b-5cba-4461-8879-789cfc7f201a name=/runtime.v1.RuntimeService/Version
	Dec 19 02:30:46 addons-959667 crio[817]: time="2025-12-19 02:30:46.455906628Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0053751b-5cba-4461-8879-789cfc7f201a name=/runtime.v1.RuntimeService/Version
	Dec 19 02:30:46 addons-959667 crio[817]: time="2025-12-19 02:30:46.457571648Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7c5d8325-3167-44f3-aeb0-e1c2e2353503 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 02:30:46 addons-959667 crio[817]: time="2025-12-19 02:30:46.458767379Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766111446458743490,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:551110,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7c5d8325-3167-44f3-aeb0-e1c2e2353503 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 02:30:46 addons-959667 crio[817]: time="2025-12-19 02:30:46.459567248Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=77a24524-1ee9-4a1c-9c0e-6ec196032c51 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 02:30:46 addons-959667 crio[817]: time="2025-12-19 02:30:46.459710117Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=77a24524-1ee9-4a1c-9c0e-6ec196032c51 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 02:30:46 addons-959667 crio[817]: time="2025-12-19 02:30:46.460004185Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c0dd7f98f906b69d21a969d55cae8066f274005365a6a20a7d209f5b6270a374,PodSandboxId:52cdd625639437b6b104f10bacae393895639521535fca05efb621111d8b5a6b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:00e053577693e0ee5f7f8b433cdb249624af188622d0da5df20eef4e25a0881c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,State:CONTAINER_RUNNING,CreatedAt:1766111305612629977,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: adfec46a-88fb-45a3-a47c-4b9e6b5a439b,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6377ab8218b4ff42549d2a9b144892ec61558de7eb2c2dfb8e896ebbdcfda587,PodSandboxId:0da791688e6ede865e08a846100d2175d1f1eabe642f7a534e138af1df583d8a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1766111273124897440,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1db5ca4f-1d15-4ebb-b546-a808b3122492,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a7e2ab179be66a75cd88db3192813ab417a61ef5761873a7622b1a48b4ea052,PodSandboxId:7cbd157787c6a4b86ebe20147812e3d2b5a04e252fa206f4ad36b832686bc038,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1766111263064720605,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-kjr2x,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1e7819f7-430d-4fe2-ab7b-e359c036f6e5,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:640fc0aaec1755429adad4dfe316a441764f654ca89b653f5d7eded98457871c,PodSandboxId:1337bec495a7486b6d37ef2735dd1b4487ba472909f2c3e8c72d17e68fde0e05,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1766111246262084106,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-n2kc5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 50f4eb7e-b566-49e7-a548-7d28e233e965,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed94298812e659f7c6301c14755231e621b2e73a22ce3fe0b8f733925aeef294,PodSandboxId:0158ac50021e4a0d7a0a148857a35e6289e45c0265480e5001fc68b437949b84,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1766111243087019823,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-nwfnb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5641aab4-201e-45fc-b1e4-e41ba721e3a2,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb4f21ce47ee59d2d5b22e50d3cda6e090990fb5ae1486c1253b12878d960b1b,PodSandboxId:bbb85407ed64e50b3bf98e3052114f1c6fd32c03cce81802dfe2727f90c86271,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1766111210681727034,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ff76ffc-ec67-4423-9e5f-247c6c467e65,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2af29dec50706a06b92b92cbced4875c18c8d6624ad9d851221a0ddd1ac07dce,PodSandboxId:bb2b9773b463ed0a5cb7fd08151885939d94055f6ae341c593fd1f70a93bb56a,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&I
mageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1766111194256262113,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-ndblc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52e98e31-befb-48ad-b245-22c725d997a8,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3078f62ed2e8a4aa4506caac3b02f7379c293aa8b2317a1fcfd0239787160f5,PodSandboxId:3143c424efb4bcb2d1dbca10b3b1aa3e3a0c7d2c9f81597c93aa4b056f07a99d,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766111193990064944,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72e1d437-75f0-405a-9f77-e0fccbf8ac17,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f352adcb57b4458acd68079e6c8959a74042df77f67c5a0839c3fa05161945d0,PodSandboxId:a684d6923f108154775ce3dc2fb99550dabf7ee6bf813f3750fe93451a61d2eb,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1766111185328028548,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7mvw5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 955a34ca-2e9f-4581-b200-58587c45d418,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ac645f3b8b58ae3c8cba1ddd8da39850636a7dca2a7cef3962a9377c8b5ad3,PodSandboxId:f8bd7dd16002e5314c8c73df5f311bd74b2f1310f61e8a74807cc0ebd7035f24,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1766111184355591713,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rn72z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0643a592-a08b-4feb-a8af-ff6845f08be6,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b172f2f068e2fdab57c063efca8ba9d6275d4dd64e8782bcb0805f53a16fb565,PodSandboxId:cefefe3216ba29ec3c52a94b759a0b6bb820a86172d153098370d4e673f2dc49,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766111173617348313,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-959667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24cd760bc9389ce67b4bb7e6badfc433,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":102
57,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89789a833497c8fb35ec731ea181a206226766bbe96c86c9017127244ba52185,PodSandboxId:ea7fffb4b2ad6fe7bad5ae523e5cd34729aed21a0395fe40fea17f3b95025059,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1766111173214001618,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-959667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05ea32b24a22095e73231267d305e225,},Annotations:map[string]string{io.kubernetes.container.hash:
79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e60a506fa525e9f46d81a6ceceabffc06fa664db93cd84d681595342ed55075c,PodSandboxId:d2a8fcbb2ec516e5b44e0b4ce92aab97f17a79a1f1bd312c0265520dbb27483f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766111172970593494,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-959667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94c74054703d00a5b8a9c
b634a4aae9b,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13cbb07eef887036ab079b72dc10d3adbafd13d77f46c0f7b2998ed042ceea05,PodSandboxId:d7c9599a0689ae3e2fc804d9670e1bf6ef045403a60186b77428f54fc14e5c3e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766111172814512787,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sched
uler-addons-959667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 760fdf93d445d5888cc45272bb92887c,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=77a24524-1ee9-4a1c-9c0e-6ec196032c51 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	c0dd7f98f906b       public.ecr.aws/nginx/nginx@sha256:00e053577693e0ee5f7f8b433cdb249624af188622d0da5df20eef4e25a0881c                           2 minutes ago       Running             nginx                     0                   52cdd62563943       nginx                                       default
	6377ab8218b4f       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago       Running             busybox                   0                   0da791688e6ed       busybox                                     default
	0a7e2ab179be6       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad             3 minutes ago       Running             controller                0                   7cbd157787c6a       ingress-nginx-controller-85d4c799dd-kjr2x   ingress-nginx
	640fc0aaec175       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285   3 minutes ago       Exited              patch                     0                   1337bec495a74       ingress-nginx-admission-patch-n2kc5         ingress-nginx
	ed94298812e65       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285   3 minutes ago       Exited              create                    0                   0158ac50021e4       ingress-nginx-admission-create-nwfnb        ingress-nginx
	eb4f21ce47ee5       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               3 minutes ago       Running             minikube-ingress-dns      0                   bbb85407ed64e       kube-ingress-dns-minikube                   kube-system
	2af29dec50706       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   bb2b9773b463e       amd-gpu-device-plugin-ndblc                 kube-system
	b3078f62ed2e8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   3143c424efb4b       storage-provisioner                         kube-system
	f352adcb57b44       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             4 minutes ago       Running             coredns                   0                   a684d6923f108       coredns-66bc5c9577-7mvw5                    kube-system
	a1ac645f3b8b5       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                                             4 minutes ago       Running             kube-proxy                0                   f8bd7dd16002e       kube-proxy-rn72z                            kube-system
	b172f2f068e2f       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                                             4 minutes ago       Running             kube-controller-manager   0                   cefefe3216ba2       kube-controller-manager-addons-959667       kube-system
	89789a833497c       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                                             4 minutes ago       Running             kube-apiserver            0                   ea7fffb4b2ad6       kube-apiserver-addons-959667                kube-system
	e60a506fa525e       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                             4 minutes ago       Running             etcd                      0                   d2a8fcbb2ec51       etcd-addons-959667                          kube-system
	13cbb07eef887       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                                             4 minutes ago       Running             kube-scheduler            0                   d7c9599a0689a       kube-scheduler-addons-959667                kube-system
	
	
	==> coredns [f352adcb57b4458acd68079e6c8959a74042df77f67c5a0839c3fa05161945d0] <==
	[INFO] 10.244.0.8:56596 - 33988 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000186974s
	[INFO] 10.244.0.8:56596 - 9886 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.00007933s
	[INFO] 10.244.0.8:56596 - 41398 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.00008127s
	[INFO] 10.244.0.8:56596 - 2246 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000171573s
	[INFO] 10.244.0.8:56596 - 65124 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000069427s
	[INFO] 10.244.0.8:56596 - 26720 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000092017s
	[INFO] 10.244.0.8:56596 - 14576 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000272847s
	[INFO] 10.244.0.8:41101 - 9590 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000098841s
	[INFO] 10.244.0.8:41101 - 9873 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000122785s
	[INFO] 10.244.0.8:42930 - 5803 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000074571s
	[INFO] 10.244.0.8:42930 - 5476 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000481109s
	[INFO] 10.244.0.8:45592 - 6573 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00009053s
	[INFO] 10.244.0.8:45592 - 6822 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000106312s
	[INFO] 10.244.0.8:60151 - 4675 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000085517s
	[INFO] 10.244.0.8:60151 - 4429 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000305917s
	[INFO] 10.244.0.23:51822 - 40951 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000348352s
	[INFO] 10.244.0.23:41468 - 26328 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001385205s
	[INFO] 10.244.0.23:35893 - 19265 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000162171s
	[INFO] 10.244.0.23:51499 - 2584 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00014329s
	[INFO] 10.244.0.23:42972 - 43294 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000107584s
	[INFO] 10.244.0.23:38266 - 45527 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000085847s
	[INFO] 10.244.0.23:37586 - 55217 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001291546s
	[INFO] 10.244.0.23:59244 - 2070 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 306 0.001595635s
	[INFO] 10.244.0.27:52742 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000494036s
	[INFO] 10.244.0.27:33040 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000277417s
	
	
	==> describe nodes <==
	Name:               addons-959667
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-959667
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=addons-959667
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T02_26_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-959667
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 02:26:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-959667
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 02:30:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 02:28:52 +0000   Fri, 19 Dec 2025 02:26:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 02:28:52 +0000   Fri, 19 Dec 2025 02:26:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 02:28:52 +0000   Fri, 19 Dec 2025 02:26:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 02:28:52 +0000   Fri, 19 Dec 2025 02:26:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.204
	  Hostname:    addons-959667
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001796Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001796Ki
	  pods:               110
	System Info:
	  Machine ID:                 ec9b0108611c411bb1074f85b7cff5e9
	  System UUID:                ec9b0108-611c-411b-b107-4f85b7cff5e9
	  Boot ID:                    98e8da46-6c7d-44c7-8642-8ca495509473
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m57s
	  default                     hello-world-app-5d498dc89-5vfcf              0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-kjr2x    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m15s
	  kube-system                 amd-gpu-device-plugin-ndblc                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 coredns-66bc5c9577-7mvw5                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m22s
	  kube-system                 etcd-addons-959667                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m29s
	  kube-system                 kube-apiserver-addons-959667                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 kube-controller-manager-addons-959667        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-proxy-rn72z                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 kube-scheduler-addons-959667                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m21s  kube-proxy       
	  Normal  Starting                 4m28s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m28s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m28s  kubelet          Node addons-959667 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m28s  kubelet          Node addons-959667 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m28s  kubelet          Node addons-959667 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m27s  kubelet          Node addons-959667 status is now: NodeReady
	  Normal  RegisteredNode           4m24s  node-controller  Node addons-959667 event: Registered Node addons-959667 in Controller
	
	
	==> dmesg <==
	[  +9.204899] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.773281] kauditd_printk_skb: 5 callbacks suppressed
	[Dec19 02:27] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.054794] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.136967] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.069035] kauditd_printk_skb: 20 callbacks suppressed
	[  +0.989035] kauditd_printk_skb: 155 callbacks suppressed
	[  +1.103675] kauditd_printk_skb: 142 callbacks suppressed
	[  +0.141754] kauditd_printk_skb: 71 callbacks suppressed
	[  +4.377567] kauditd_printk_skb: 44 callbacks suppressed
	[  +0.000227] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.288900] kauditd_printk_skb: 41 callbacks suppressed
	[Dec19 02:28] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.836612] kauditd_printk_skb: 22 callbacks suppressed
	[  +4.825419] kauditd_printk_skb: 38 callbacks suppressed
	[  +0.664812] kauditd_printk_skb: 141 callbacks suppressed
	[  +0.806526] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.092781] kauditd_printk_skb: 34 callbacks suppressed
	[  +3.537993] kauditd_printk_skb: 141 callbacks suppressed
	[  +4.717116] kauditd_printk_skb: 61 callbacks suppressed
	[  +0.000042] kauditd_printk_skb: 113 callbacks suppressed
	[  +7.012341] kauditd_printk_skb: 41 callbacks suppressed
	[Dec19 02:29] kauditd_printk_skb: 10 callbacks suppressed
	[  +3.666626] kauditd_printk_skb: 61 callbacks suppressed
	[Dec19 02:30] kauditd_printk_skb: 127 callbacks suppressed
	
	
	==> etcd [e60a506fa525e9f46d81a6ceceabffc06fa664db93cd84d681595342ed55075c] <==
	{"level":"info","ts":"2025-12-19T02:26:49.080268Z","caller":"traceutil/trace.go:172","msg":"trace[1827988257] linearizableReadLoop","detail":"{readStateIndex:921; appliedIndex:921; }","duration":"113.156492ms","start":"2025-12-19T02:26:48.967095Z","end":"2025-12-19T02:26:49.080252Z","steps":["trace[1827988257] 'read index received'  (duration: 113.149637ms)","trace[1827988257] 'applied index is now lower than readState.Index'  (duration: 6.028µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-19T02:26:49.080395Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.283534ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T02:26:49.080414Z","caller":"traceutil/trace.go:172","msg":"trace[670115384] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:902; }","duration":"113.318617ms","start":"2025-12-19T02:26:48.967089Z","end":"2025-12-19T02:26:49.080408Z","steps":["trace[670115384] 'agreement among raft nodes before linearized reading'  (duration: 113.262161ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T02:26:49.080952Z","caller":"traceutil/trace.go:172","msg":"trace[664509748] transaction","detail":"{read_only:false; response_revision:903; number_of_response:1; }","duration":"136.264594ms","start":"2025-12-19T02:26:48.944677Z","end":"2025-12-19T02:26:49.080941Z","steps":["trace[664509748] 'process raft request'  (duration: 136.031732ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T02:26:50.583504Z","caller":"traceutil/trace.go:172","msg":"trace[87479314] transaction","detail":"{read_only:false; response_revision:905; number_of_response:1; }","duration":"116.533319ms","start":"2025-12-19T02:26:50.466959Z","end":"2025-12-19T02:26:50.583492Z","steps":["trace[87479314] 'process raft request'  (duration: 116.423147ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T02:26:52.841535Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:26:52.857909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:26:52.888763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:26:52.906708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57718","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-19T02:27:01.325737Z","caller":"traceutil/trace.go:172","msg":"trace[20396824] transaction","detail":"{read_only:false; response_revision:940; number_of_response:1; }","duration":"173.794742ms","start":"2025-12-19T02:27:01.151930Z","end":"2025-12-19T02:27:01.325725Z","steps":["trace[20396824] 'process raft request'  (duration: 173.467204ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T02:27:07.830254Z","caller":"traceutil/trace.go:172","msg":"trace[1285917892] transaction","detail":"{read_only:false; response_revision:959; number_of_response:1; }","duration":"184.216882ms","start":"2025-12-19T02:27:07.646026Z","end":"2025-12-19T02:27:07.830242Z","steps":["trace[1285917892] 'process raft request'  (duration: 183.37576ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T02:27:07.830445Z","caller":"traceutil/trace.go:172","msg":"trace[1663859456] transaction","detail":"{read_only:false; response_revision:960; number_of_response:1; }","duration":"180.443554ms","start":"2025-12-19T02:27:07.649991Z","end":"2025-12-19T02:27:07.830434Z","steps":["trace[1663859456] 'process raft request'  (duration: 180.282507ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T02:27:18.089212Z","caller":"traceutil/trace.go:172","msg":"trace[134702341] linearizableReadLoop","detail":"{readStateIndex:1018; appliedIndex:1018; }","duration":"122.066488ms","start":"2025-12-19T02:27:17.967070Z","end":"2025-12-19T02:27:18.089137Z","steps":["trace[134702341] 'read index received'  (duration: 122.063009ms)","trace[134702341] 'applied index is now lower than readState.Index'  (duration: 2.898µs)"],"step_count":2}
	{"level":"info","ts":"2025-12-19T02:27:18.089333Z","caller":"traceutil/trace.go:172","msg":"trace[1841016260] transaction","detail":"{read_only:false; response_revision:994; number_of_response:1; }","duration":"188.481478ms","start":"2025-12-19T02:27:17.900841Z","end":"2025-12-19T02:27:18.089323Z","steps":["trace[1841016260] 'process raft request'  (duration: 188.345743ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T02:27:18.089511Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"122.464601ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T02:27:18.090864Z","caller":"traceutil/trace.go:172","msg":"trace[1871803573] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:994; }","duration":"123.833404ms","start":"2025-12-19T02:27:17.967022Z","end":"2025-12-19T02:27:18.090855Z","steps":["trace[1871803573] 'agreement among raft nodes before linearized reading'  (duration: 122.450599ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T02:27:27.783491Z","caller":"traceutil/trace.go:172","msg":"trace[384602802] linearizableReadLoop","detail":"{readStateIndex:1078; appliedIndex:1078; }","duration":"247.947481ms","start":"2025-12-19T02:27:27.535431Z","end":"2025-12-19T02:27:27.783379Z","steps":["trace[384602802] 'read index received'  (duration: 247.940399ms)","trace[384602802] 'applied index is now lower than readState.Index'  (duration: 6.134µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-19T02:27:27.784018Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"248.563161ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.204\" limit:1 ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2025-12-19T02:27:27.784102Z","caller":"traceutil/trace.go:172","msg":"trace[1386038140] range","detail":"{range_begin:/registry/masterleases/192.168.39.204; range_end:; response_count:1; response_revision:1051; }","duration":"248.682123ms","start":"2025-12-19T02:27:27.535405Z","end":"2025-12-19T02:27:27.784087Z","steps":["trace[1386038140] 'agreement among raft nodes before linearized reading'  (duration: 248.428167ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T02:27:27.784600Z","caller":"traceutil/trace.go:172","msg":"trace[1332791720] transaction","detail":"{read_only:false; response_revision:1052; number_of_response:1; }","duration":"311.948252ms","start":"2025-12-19T02:27:27.472641Z","end":"2025-12-19T02:27:27.784589Z","steps":["trace[1332791720] 'process raft request'  (duration: 310.969656ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T02:27:27.785180Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-19T02:27:27.472617Z","time spent":"312.042889ms","remote":"127.0.0.1:40608","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4224,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/jobs/ingress-nginx/ingress-nginx-admission-patch\" mod_revision:682 > success:<request_put:<key:\"/registry/jobs/ingress-nginx/ingress-nginx-admission-patch\" value_size:4158 >> failure:<request_range:<key:\"/registry/jobs/ingress-nginx/ingress-nginx-admission-patch\" > >"}
	{"level":"info","ts":"2025-12-19T02:27:27.791590Z","caller":"traceutil/trace.go:172","msg":"trace[505944798] transaction","detail":"{read_only:false; response_revision:1053; number_of_response:1; }","duration":"309.64446ms","start":"2025-12-19T02:27:27.481935Z","end":"2025-12-19T02:27:27.791580Z","steps":["trace[505944798] 'process raft request'  (duration: 309.474038ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T02:27:27.791652Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-19T02:27:27.481910Z","time spent":"309.708307ms","remote":"127.0.0.1:40554","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4617,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-n2kc5\" mod_revision:1048 > success:<request_put:<key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-n2kc5\" value_size:4545 >> failure:<request_range:<key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-n2kc5\" > >"}
	{"level":"info","ts":"2025-12-19T02:27:41.532336Z","caller":"traceutil/trace.go:172","msg":"trace[1866890289] transaction","detail":"{read_only:false; response_revision:1133; number_of_response:1; }","duration":"164.510327ms","start":"2025-12-19T02:27:41.367771Z","end":"2025-12-19T02:27:41.532281Z","steps":["trace[1866890289] 'process raft request'  (duration: 164.391559ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T02:28:25.504707Z","caller":"traceutil/trace.go:172","msg":"trace[1317889077] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1423; }","duration":"120.547797ms","start":"2025-12-19T02:28:25.384146Z","end":"2025-12-19T02:28:25.504693Z","steps":["trace[1317889077] 'process raft request'  (duration: 120.452223ms)"],"step_count":1}
	
	
	==> kernel <==
	 02:30:46 up 5 min,  0 users,  load average: 1.38, 1.55, 0.75
	Linux addons-959667 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Dec 17 12:49:57 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [89789a833497c8fb35ec731ea181a206226766bbe96c86c9017127244ba52185] <==
	E1219 02:27:13.306614       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.246.94:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.246.94:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.246.94:443: connect: connection refused" logger="UnhandledError"
	E1219 02:27:13.327211       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.246.94:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.246.94:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.246.94:443: connect: connection refused" logger="UnhandledError"
	I1219 02:27:13.406463       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1219 02:28:00.748042       1 conn.go:339] Error on socket receive: read tcp 192.168.39.204:8443->192.168.39.1:33144: use of closed network connection
	E1219 02:28:00.928644       1 conn.go:339] Error on socket receive: read tcp 192.168.39.204:8443->192.168.39.1:33174: use of closed network connection
	I1219 02:28:09.939823       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.109.153.135"}
	I1219 02:28:15.637506       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1219 02:28:15.832575       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.190.36"}
	I1219 02:28:56.374152       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1219 02:28:59.322784       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1219 02:29:14.318677       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1219 02:29:19.043461       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1219 02:29:19.043578       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1219 02:29:19.064929       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1219 02:29:19.064975       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1219 02:29:19.075429       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1219 02:29:19.075454       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1219 02:29:19.104725       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1219 02:29:19.104769       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1219 02:29:19.111196       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1219 02:29:19.111236       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1219 02:29:20.066573       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1219 02:29:20.111429       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1219 02:29:20.141712       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1219 02:30:45.460886       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.101.84.141"}
	
	
	==> kube-controller-manager [b172f2f068e2fdab57c063efca8ba9d6275d4dd64e8782bcb0805f53a16fb565] <==
	E1219 02:29:23.464615       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1219 02:29:27.151894       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1219 02:29:27.152772       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1219 02:29:28.429847       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1219 02:29:28.430838       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1219 02:29:29.433580       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1219 02:29:29.434654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1219 02:29:35.323793       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1219 02:29:35.324790       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1219 02:29:37.451921       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1219 02:29:37.452909       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1219 02:29:40.456227       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1219 02:29:40.457420       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1219 02:29:50.749906       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1219 02:29:50.751107       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1219 02:29:57.550538       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1219 02:29:57.552008       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1219 02:29:59.290537       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1219 02:29:59.291542       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1219 02:30:24.096108       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1219 02:30:24.097319       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1219 02:30:38.120338       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1219 02:30:38.121428       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1219 02:30:45.291269       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1219 02:30:45.292518       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [a1ac645f3b8b58ae3c8cba1ddd8da39850636a7dca2a7cef3962a9377c8b5ad3] <==
	I1219 02:26:24.851623       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1219 02:26:24.952381       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1219 02:26:24.954427       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.204"]
	E1219 02:26:24.954928       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 02:26:25.203753       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1219 02:26:25.203843       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1219 02:26:25.203871       1 server_linux.go:132] "Using iptables Proxier"
	I1219 02:26:25.219095       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 02:26:25.219508       1 server.go:527] "Version info" version="v1.34.3"
	I1219 02:26:25.220555       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 02:26:25.232646       1 config.go:200] "Starting service config controller"
	I1219 02:26:25.232676       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 02:26:25.232740       1 config.go:309] "Starting node config controller"
	I1219 02:26:25.232744       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 02:26:25.232748       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 02:26:25.233131       1 config.go:106] "Starting endpoint slice config controller"
	I1219 02:26:25.233138       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 02:26:25.233149       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 02:26:25.233152       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 02:26:25.333382       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1219 02:26:25.333422       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 02:26:25.333463       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [13cbb07eef887036ab079b72dc10d3adbafd13d77f46c0f7b2998ed042ceea05] <==
	E1219 02:26:15.837532       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1219 02:26:15.837578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1219 02:26:15.837619       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1219 02:26:15.837670       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1219 02:26:15.837713       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1219 02:26:15.837758       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1219 02:26:15.837799       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1219 02:26:15.837874       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1219 02:26:15.837910       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1219 02:26:15.837972       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1219 02:26:15.838054       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1219 02:26:16.670085       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1219 02:26:16.692616       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1219 02:26:16.711166       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1219 02:26:16.778979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1219 02:26:16.782636       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1219 02:26:16.798902       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1219 02:26:16.850698       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1219 02:26:16.918157       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1219 02:26:16.991886       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1219 02:26:17.035192       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1219 02:26:17.084070       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1219 02:26:17.138667       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1219 02:26:17.140823       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1219 02:26:18.818638       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 19 02:29:22 addons-959667 kubelet[1509]: I1219 02:29:22.336705    1509 scope.go:117] "RemoveContainer" containerID="83d09c5da1f0393d85f7ea25ce72730124e5b609772b9ac939ab8ee198fdebf3"
	Dec 19 02:29:22 addons-959667 kubelet[1509]: I1219 02:29:22.337156    1509 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83d09c5da1f0393d85f7ea25ce72730124e5b609772b9ac939ab8ee198fdebf3"} err="failed to get container status \"83d09c5da1f0393d85f7ea25ce72730124e5b609772b9ac939ab8ee198fdebf3\": rpc error: code = NotFound desc = could not find container \"83d09c5da1f0393d85f7ea25ce72730124e5b609772b9ac939ab8ee198fdebf3\": container with ID starting with 83d09c5da1f0393d85f7ea25ce72730124e5b609772b9ac939ab8ee198fdebf3 not found: ID does not exist"
	Dec 19 02:29:22 addons-959667 kubelet[1509]: I1219 02:29:22.337170    1509 scope.go:117] "RemoveContainer" containerID="5c66bae1e359bd51ac944ee34ce5aa2c375b053f74392d8feace042b44f7c543"
	Dec 19 02:29:22 addons-959667 kubelet[1509]: I1219 02:29:22.337597    1509 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c66bae1e359bd51ac944ee34ce5aa2c375b053f74392d8feace042b44f7c543"} err="failed to get container status \"5c66bae1e359bd51ac944ee34ce5aa2c375b053f74392d8feace042b44f7c543\": rpc error: code = NotFound desc = could not find container \"5c66bae1e359bd51ac944ee34ce5aa2c375b053f74392d8feace042b44f7c543\": container with ID starting with 5c66bae1e359bd51ac944ee34ce5aa2c375b053f74392d8feace042b44f7c543 not found: ID does not exist"
	Dec 19 02:29:22 addons-959667 kubelet[1509]: I1219 02:29:22.337628    1509 scope.go:117] "RemoveContainer" containerID="ef772064b997142f6b45c126c62f28d1df650f94cd3c687455407e506c8bd622"
	Dec 19 02:29:22 addons-959667 kubelet[1509]: I1219 02:29:22.338189    1509 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef772064b997142f6b45c126c62f28d1df650f94cd3c687455407e506c8bd622"} err="failed to get container status \"ef772064b997142f6b45c126c62f28d1df650f94cd3c687455407e506c8bd622\": rpc error: code = NotFound desc = could not find container \"ef772064b997142f6b45c126c62f28d1df650f94cd3c687455407e506c8bd622\": container with ID starting with ef772064b997142f6b45c126c62f28d1df650f94cd3c687455407e506c8bd622 not found: ID does not exist"
	Dec 19 02:29:28 addons-959667 kubelet[1509]: E1219 02:29:28.442771    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766111368441816801  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551110}  inodes_used:{value:196}}"
	Dec 19 02:29:28 addons-959667 kubelet[1509]: E1219 02:29:28.443171    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766111368441816801  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551110}  inodes_used:{value:196}}"
	Dec 19 02:29:38 addons-959667 kubelet[1509]: E1219 02:29:38.445158    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766111378444872973  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551110}  inodes_used:{value:196}}"
	Dec 19 02:29:38 addons-959667 kubelet[1509]: E1219 02:29:38.445184    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766111378444872973  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551110}  inodes_used:{value:196}}"
	Dec 19 02:29:48 addons-959667 kubelet[1509]: E1219 02:29:48.447241    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766111388446703175  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551110}  inodes_used:{value:196}}"
	Dec 19 02:29:48 addons-959667 kubelet[1509]: E1219 02:29:48.447275    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766111388446703175  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551110}  inodes_used:{value:196}}"
	Dec 19 02:29:58 addons-959667 kubelet[1509]: E1219 02:29:58.450554    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766111398449495730  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551110}  inodes_used:{value:196}}"
	Dec 19 02:29:58 addons-959667 kubelet[1509]: E1219 02:29:58.450592    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766111398449495730  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551110}  inodes_used:{value:196}}"
	Dec 19 02:30:08 addons-959667 kubelet[1509]: E1219 02:30:08.452893    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766111408452328225  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551110}  inodes_used:{value:196}}"
	Dec 19 02:30:08 addons-959667 kubelet[1509]: E1219 02:30:08.452916    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766111408452328225  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551110}  inodes_used:{value:196}}"
	Dec 19 02:30:18 addons-959667 kubelet[1509]: E1219 02:30:18.457595    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766111418457273092  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551110}  inodes_used:{value:196}}"
	Dec 19 02:30:18 addons-959667 kubelet[1509]: E1219 02:30:18.457634    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766111418457273092  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551110}  inodes_used:{value:196}}"
	Dec 19 02:30:28 addons-959667 kubelet[1509]: E1219 02:30:28.461617    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766111428460978570  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551110}  inodes_used:{value:196}}"
	Dec 19 02:30:28 addons-959667 kubelet[1509]: E1219 02:30:28.461640    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766111428460978570  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551110}  inodes_used:{value:196}}"
	Dec 19 02:30:38 addons-959667 kubelet[1509]: E1219 02:30:38.463681    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766111438463354175  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551110}  inodes_used:{value:196}}"
	Dec 19 02:30:38 addons-959667 kubelet[1509]: E1219 02:30:38.463709    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766111438463354175  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551110}  inodes_used:{value:196}}"
	Dec 19 02:30:40 addons-959667 kubelet[1509]: I1219 02:30:40.265695    1509 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-ndblc" secret="" err="secret \"gcp-auth\" not found"
	Dec 19 02:30:42 addons-959667 kubelet[1509]: I1219 02:30:42.265991    1509 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 19 02:30:45 addons-959667 kubelet[1509]: I1219 02:30:45.489781    1509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftnvs\" (UniqueName: \"kubernetes.io/projected/44c57645-9273-4056-a5f5-ac018a80eca7-kube-api-access-ftnvs\") pod \"hello-world-app-5d498dc89-5vfcf\" (UID: \"44c57645-9273-4056-a5f5-ac018a80eca7\") " pod="default/hello-world-app-5d498dc89-5vfcf"
	
	
	==> storage-provisioner [b3078f62ed2e8a4aa4506caac3b02f7379c293aa8b2317a1fcfd0239787160f5] <==
	W1219 02:30:21.240675       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:30:23.243847       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:30:23.252246       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:30:25.256201       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:30:25.262000       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:30:27.264955       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:30:27.272053       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:30:29.274925       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:30:29.279671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:30:31.282578       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:30:31.287880       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:30:33.291549       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:30:33.296023       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:30:35.299356       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:30:35.306510       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:30:37.309603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:30:37.314127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:30:39.317523       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:30:39.324911       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:30:41.328272       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:30:41.336402       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:30:43.340435       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:30:43.345027       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:30:45.351345       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:30:45.372407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-959667 -n addons-959667
helpers_test.go:270: (dbg) Run:  kubectl --context addons-959667 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: hello-world-app-5d498dc89-5vfcf ingress-nginx-admission-create-nwfnb ingress-nginx-admission-patch-n2kc5
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-959667 describe pod hello-world-app-5d498dc89-5vfcf ingress-nginx-admission-create-nwfnb ingress-nginx-admission-patch-n2kc5
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-959667 describe pod hello-world-app-5d498dc89-5vfcf ingress-nginx-admission-create-nwfnb ingress-nginx-admission-patch-n2kc5: exit status 1 (71.710388ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-5vfcf
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-959667/192.168.39.204
	Start Time:       Fri, 19 Dec 2025 02:30:45 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ftnvs (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-ftnvs:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-5vfcf to addons-959667
	  Normal  Pulling    2s    kubelet            spec.containers{hello-world-app}: Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-nwfnb" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-n2kc5" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-959667 describe pod hello-world-app-5d498dc89-5vfcf ingress-nginx-admission-create-nwfnb ingress-nginx-admission-patch-n2kc5: exit status 1
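The describe call above exits non-zero because two of the three listed pods are already gone (the NotFound errors in the stderr block). A minimal sketch of an alternative post-mortem query that only describes pods which still exist; this loop is illustrative only and is not part of the test harness:

	kubectl --context addons-959667 get po -A --field-selector=status.phase!=Running \
	  -o=jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{"\n"}{end}' |
	while read -r ns name; do
	  # describe each still-existing non-running pod individually
	  kubectl --context addons-959667 -n "$ns" describe pod "$name"
	done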
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-959667 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-959667 addons disable ingress-dns --alsologtostderr -v=1: (1.42768628s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-959667 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-959667 addons disable ingress --alsologtostderr -v=1: (7.67128516s)
--- FAIL: TestAddons/parallel/Ingress (161.17s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (301.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-199791 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
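The helm install itself completes in the stderr below; what never appears is the proxied URL the test waits for. A rough manual check, outside the test, would be to read the kong proxy NodePort the chart exposes and combine it with the VM IP. The service name kubernetes-dashboard-kong-proxy is assumed from the upstream chart defaults, not taken from this run:

	# assumed service name; verify with 'kubectl -n kubernetes-dashboard get svc'
	kubectl --context functional-199791 -n kubernetes-dashboard get svc kubernetes-dashboard-kong-proxy -o=jsonpath='{.spec.ports[0].nodePort}'
	out/minikube-linux-amd64 -p functional-199791 ip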
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-199791 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-199791 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-199791 --alsologtostderr -v=1] stderr:
I1219 02:35:47.538866   14667 out.go:360] Setting OutFile to fd 1 ...
I1219 02:35:47.539132   14667 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:35:47.539142   14667 out.go:374] Setting ErrFile to fd 2...
I1219 02:35:47.539146   14667 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:35:47.539357   14667 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5010/.minikube/bin
I1219 02:35:47.539595   14667 mustload.go:66] Loading cluster: functional-199791
I1219 02:35:47.539945   14667 config.go:182] Loaded profile config "functional-199791": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1219 02:35:47.541625   14667 host.go:66] Checking if "functional-199791" exists ...
I1219 02:35:47.541800   14667 api_server.go:166] Checking apiserver status ...
I1219 02:35:47.541831   14667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1219 02:35:47.543876   14667 main.go:144] libmachine: domain functional-199791 has defined MAC address 52:54:00:47:b9:97 in network mk-functional-199791
I1219 02:35:47.544269   14667 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:47:b9:97", ip: ""} in network mk-functional-199791: {Iface:virbr1 ExpiryTime:2025-12-19 03:33:22 +0000 UTC Type:0 Mac:52:54:00:47:b9:97 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:functional-199791 Clientid:01:52:54:00:47:b9:97}
I1219 02:35:47.544293   14667 main.go:144] libmachine: domain functional-199791 has defined IP address 192.168.39.97 and MAC address 52:54:00:47:b9:97 in network mk-functional-199791
I1219 02:35:47.544426   14667 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/functional-199791/id_rsa Username:docker}
I1219 02:35:47.644409   14667 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/6336/cgroup
W1219 02:35:47.656166   14667 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/6336/cgroup: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1219 02:35:47.656222   14667 ssh_runner.go:195] Run: ls
I1219 02:35:47.661008   14667 api_server.go:253] Checking apiserver healthz at https://192.168.39.97:8441/healthz ...
I1219 02:35:47.667146   14667 api_server.go:279] https://192.168.39.97:8441/healthz returned 200:
ok
W1219 02:35:47.667190   14667 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1219 02:35:47.667398   14667 config.go:182] Loaded profile config "functional-199791": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1219 02:35:47.667422   14667 addons.go:70] Setting dashboard=true in profile "functional-199791"
I1219 02:35:47.667431   14667 addons.go:239] Setting addon dashboard=true in "functional-199791"
I1219 02:35:47.667475   14667 host.go:66] Checking if "functional-199791" exists ...
I1219 02:35:47.669400   14667 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
I1219 02:35:47.669436   14667 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
I1219 02:35:47.671782   14667 main.go:144] libmachine: domain functional-199791 has defined MAC address 52:54:00:47:b9:97 in network mk-functional-199791
I1219 02:35:47.672091   14667 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:47:b9:97", ip: ""} in network mk-functional-199791: {Iface:virbr1 ExpiryTime:2025-12-19 03:33:22 +0000 UTC Type:0 Mac:52:54:00:47:b9:97 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:functional-199791 Clientid:01:52:54:00:47:b9:97}
I1219 02:35:47.672110   14667 main.go:144] libmachine: domain functional-199791 has defined IP address 192.168.39.97 and MAC address 52:54:00:47:b9:97 in network mk-functional-199791
I1219 02:35:47.672232   14667 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/functional-199791/id_rsa Username:docker}
I1219 02:35:47.778259   14667 ssh_runner.go:195] Run: test -f /usr/bin/helm
I1219 02:35:47.781289   14667 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
I1219 02:35:47.784280   14667 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
I1219 02:35:48.589450   14667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
I1219 02:35:52.027013   14667 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (3.437511942s)
I1219 02:35:52.027102   14667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
I1219 02:35:52.343990   14667 addons.go:500] Verifying addon dashboard=true in "functional-199791"
I1219 02:35:52.347316   14667 out.go:179] * Verifying dashboard addon...
I1219 02:35:52.348912   14667 kapi.go:59] client config for functional-199791: &rest.Config{Host:"https://192.168.39.97:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-199791/client.crt", KeyFile:"/home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-199791/client.key", CAFile:"/home/jenkins/minikube-integration/22230-5010/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2863880), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1219 02:35:52.349315   14667 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1219 02:35:52.349335   14667 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1219 02:35:52.349341   14667 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1219 02:35:52.349345   14667 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1219 02:35:52.349349   14667 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1219 02:35:52.349643   14667 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
I1219 02:35:52.362334   14667 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
I1219 02:35:52.362349   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:35:52.855825   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:35:53.355445   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:35:53.852916   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:35:54.353977   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:35:54.853269   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:35:55.352751   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:35:55.852990   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:35:56.353583   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:35:56.852916   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:35:57.353466   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:35:57.853001   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:35:58.353584   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:35:58.853053   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:35:59.353936   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:35:59.853141   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:00.353205   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:00.852435   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:01.353424   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:01.853030   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:02.354115   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:02.852530   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:03.353982   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:03.853647   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:04.353127   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:04.852248   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:05.353077   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:05.853922   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:06.354072   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:06.853694   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:07.353615   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:07.853276   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:08.352543   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:08.853783   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:09.353670   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:09.853886   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:10.353463   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:10.852518   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:11.353711   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:11.853080   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:12.353727   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:12.853531   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:13.353077   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:13.852379   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:14.352493   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:14.852883   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:15.354130   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:15.852650   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:16.355627   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:16.853739   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:17.356471   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:17.852830   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:18.353123   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:18.853901   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:19.354351   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:19.853390   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:20.353411   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:20.852998   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:21.353976   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:21.854632   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:22.352693   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:22.853202   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:23.353499   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:23.852661   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:24.353725   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:24.852905   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:25.356872   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:25.853483   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:26.353719   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:26.853954   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:27.354321   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:27.853464   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:28.352681   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:28.854763   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:29.354675   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:29.853222   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:30.353364   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:30.852696   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:31.353827   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:31.853780   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:32.353419   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:32.854157   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:33.355030   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:33.854776   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:34.354334   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:34.855091   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:35.354510   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:35.853879   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:36.353967   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:36.853114   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:37.353644   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:37.853715   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:38.353674   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:38.854508   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:39.353685   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:39.852784   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:40.352972   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:40.853522   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:41.353661   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:41.853324   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:42.352815   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:42.853311   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:43.352921   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:43.853042   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:44.354011   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:44.853651   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:45.353352   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:45.852581   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:46.353193   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:46.852298   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:47.353917   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:47.853086   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:48.352650   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:48.854185   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:49.353216   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:49.852333   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:50.352903   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:50.852924   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:51.354609   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:51.852937   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:52.353532   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:52.853414   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:53.354099   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:53.852778   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:54.353759   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:54.853524   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:55.353440   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:55.852464   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:56.353003   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:56.853789   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:57.354037   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:57.853159   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:58.352948   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:58.855041   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:59.352127   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:59.852326   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:00.352616   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:00.852801   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:01.353688   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:01.852937   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:02.353150   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:02.852341   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:03.352831   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:03.852995   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:04.353550   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:04.852965   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:05.353512   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:05.853009   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:06.353375   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:06.853130   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:07.352839   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:07.852993   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:08.353882   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:08.853361   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:09.352610   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:09.853144   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:10.352686   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:10.852961   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:11.353457   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:11.852490   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:12.353142   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:12.852537   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:13.352811   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:13.853045   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:14.353417   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:14.852464   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:15.353872   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:15.853293   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:16.352725   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:16.853194   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:17.352655   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:17.853108   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:18.352689   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:18.853228   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:19.352960   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:19.853176   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:20.353481   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:20.852920   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:21.353658   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:21.853133   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:22.353924   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:22.854272   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:23.353729   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:23.852890   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:24.353388   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:24.852422   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:25.353770   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:25.852942   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:26.353035   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:26.853504   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:27.353169   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:27.851857   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:28.353351   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:28.852812   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:29.353007   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:29.853300   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:30.352685   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:30.852875   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:31.353420   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:31.853563   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:32.353101   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:32.852344   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:33.353052   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:33.853413   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:34.353162   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:34.852250   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:35.353495   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:35.852707   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:36.353558   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:36.852869   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:37.353662   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:37.853347   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:38.353394   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:38.853084   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:39.352782   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:39.853138   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:40.352747   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:40.853018   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:41.353657   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:41.853248   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:42.352545   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:42.852868   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:43.353410   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:43.852139   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:44.352449   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:44.852561   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:45.353494   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:45.852487   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:46.352952   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:46.853835   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:47.353353   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:47.853243   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:48.352649   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:48.853843   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:49.353349   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:49.852514   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:50.352863   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:50.853123   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:51.352533   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:51.852662   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:52.353480   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:52.853686   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:53.353075   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:53.853356   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:54.353148   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:54.852317   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:55.352834   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:55.853256   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:56.352580   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:56.853609   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:57.352792   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:57.855980   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:58.353584   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:58.852773   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:59.353855   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:59.854505   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:00.352947   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:00.853396   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:01.352654   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:01.853719   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:02.353050   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:02.851856   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:03.353261   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:03.852100   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:04.352218   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:04.852736   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:05.353041   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:05.853135   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:06.352553   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:06.852701   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:07.353278   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:07.852043   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:08.353668   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:08.853496   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:09.353088   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:09.852234   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:10.352716   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:10.852893   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:11.353449   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:11.852328   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:12.352890   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:12.852799   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:13.353143   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:13.851937   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:14.353809   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:14.853614   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:15.352819   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:15.930518   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:16.353200   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:16.853271   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:17.352862   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:17.852974   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:18.353914   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:18.853260   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:19.352696   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:19.852889   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:20.353086   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:20.852408   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:21.352885   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:21.853200   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:22.352508   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:22.852878   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:23.353780   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:23.852615   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:24.353202   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:24.852214   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:25.352874   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:25.852764   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:26.353306   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:26.852013   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:27.353662   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:27.852627   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:28.353112   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:28.853594   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:29.352935   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:29.853213   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:30.352280   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:30.852591   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:31.353158   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:31.852297   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:32.352555   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:32.853071   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:33.353280   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:33.852843   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:34.353674   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:34.852791   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:35.353546   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:35.852304   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:36.352682   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:36.852886   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:37.353661   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:37.852564   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:38.352936   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:38.854329   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:39.352425   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:39.852027   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:40.354293   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:40.852489   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:41.353149   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:41.851757   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:42.352727   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:42.852587   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:43.353017   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:43.853387   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:44.352497   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:44.852715   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:45.353083   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:45.853541   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:46.353244   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:46.852591   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:47.353726   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:47.855652   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:48.353748   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:48.856266   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:49.352729   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:49.852920   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:50.353309   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:50.852949   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:51.353552   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:51.852811   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:52.353012   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:52.853657   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:53.353485   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:53.852921   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:54.353314   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:54.852852   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:55.353640   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:55.853121   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:56.352209   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:56.852127   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:57.352272   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:57.852005   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:58.353410   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:58.855317   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:59.352722   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:38:59.853140   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:39:00.352250   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:39:00.852301   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:39:01.352663   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:39:01.853105   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:39:02.353710   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:39:02.852674   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:39:03.352847   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:39:03.853245   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:39:04.352739   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:39:04.853214   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:39:05.353178   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:39:05.852328   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:39:06.352421   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:39:06.852623   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:39:07.353463   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:39:07.853269   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:39:08.353614   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:39:08.855766   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:39:09.353565   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:39:09.853020   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:39:10.352486   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:39:10.852696   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:39:11.353209   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:39:11.852482   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:39:12.352926   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:39:12.854031   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:39:13.353424   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:39:13.852910   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:39:14.353488   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:39:14.853293   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:39:15.352641   14667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
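The kapi.go lines above show the verifier polling the "kubernetes-dashboard" namespace by label selector roughly every 500ms until the dashboard-web pod leaves Pending, which it never does here. Below is a minimal client-go sketch of that kind of poll, useful when reproducing the failure by hand; it is an illustrative sketch, not minikube's own kapi implementation, and it assumes a reachable cluster plus a kubeconfig path passed as the first command-line argument (the namespace and selector are taken from the log).

// pollpod.go: list pods matching the dashboard-web label selector and print
// their phase and conditions, to see why a pod is stuck in Pending.
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := os.Args[1] // e.g. the kubeconfig for the functional-199791 profile
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	const (
		ns       = "kubernetes-dashboard"
		selector = "app.kubernetes.io/name=kubernetes-dashboard-web"
	)

	// Poll every 500ms (roughly the cadence seen in the log) for up to 2 minutes.
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods(ns).List(context.Background(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s phase=%s\n", p.Name, p.Status.Phase)
			for _, c := range p.Status.Conditions {
				// A Pending pod with PodScheduled=False or ContainersReady=False
				// usually carries the underlying reason in these conditions.
				fmt.Printf("  condition %s=%s reason=%s msg=%s\n",
					c.Type, c.Status, c.Reason, c.Message)
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}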
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-199791 -n functional-199791
helpers_test.go:253: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-199791 logs -n 25: (1.171287956s)
helpers_test.go:261: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                             ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-199791 ssh cat /etc/hostname                                                                                                                      │ functional-199791 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │ 19 Dec 25 02:36 UTC │
	│ license │                                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │ 19 Dec 25 02:36 UTC │
	│ ssh     │ functional-199791 ssh sudo systemctl is-active docker                                                                                                        │ functional-199791 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │                     │
	│ ssh     │ functional-199791 ssh sudo systemctl is-active containerd                                                                                                    │ functional-199791 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │                     │
	│ image   │ functional-199791 image load --daemon kicbase/echo-server:functional-199791 --alsologtostderr                                                                │ functional-199791 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │ 19 Dec 25 02:36 UTC │
	│ image   │ functional-199791 image ls                                                                                                                                   │ functional-199791 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │ 19 Dec 25 02:36 UTC │
	│ image   │ functional-199791 image load --daemon kicbase/echo-server:functional-199791 --alsologtostderr                                                                │ functional-199791 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │ 19 Dec 25 02:36 UTC │
	│ image   │ functional-199791 image ls                                                                                                                                   │ functional-199791 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │ 19 Dec 25 02:36 UTC │
	│ image   │ functional-199791 image load --daemon kicbase/echo-server:functional-199791 --alsologtostderr                                                                │ functional-199791 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │ 19 Dec 25 02:36 UTC │
	│ image   │ functional-199791 image ls                                                                                                                                   │ functional-199791 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │ 19 Dec 25 02:36 UTC │
	│ image   │ functional-199791 image save kicbase/echo-server:functional-199791 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-199791 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │ 19 Dec 25 02:36 UTC │
	│ image   │ functional-199791 image rm kicbase/echo-server:functional-199791 --alsologtostderr                                                                           │ functional-199791 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │ 19 Dec 25 02:36 UTC │
	│ image   │ functional-199791 image ls                                                                                                                                   │ functional-199791 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │ 19 Dec 25 02:36 UTC │
	│ image   │ functional-199791 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-199791 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │ 19 Dec 25 02:36 UTC │
	│ image   │ functional-199791 image ls                                                                                                                                   │ functional-199791 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │ 19 Dec 25 02:36 UTC │
	│ image   │ functional-199791 image save --daemon kicbase/echo-server:functional-199791 --alsologtostderr                                                                │ functional-199791 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │ 19 Dec 25 02:36 UTC │
	│ ssh     │ functional-199791 ssh sudo cat /etc/ssl/certs/8937.pem                                                                                                       │ functional-199791 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │ 19 Dec 25 02:36 UTC │
	│ ssh     │ functional-199791 ssh sudo cat /usr/share/ca-certificates/8937.pem                                                                                           │ functional-199791 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │ 19 Dec 25 02:36 UTC │
	│ ssh     │ functional-199791 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                     │ functional-199791 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │ 19 Dec 25 02:36 UTC │
	│ ssh     │ functional-199791 ssh sudo cat /etc/ssl/certs/89372.pem                                                                                                      │ functional-199791 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │ 19 Dec 25 02:36 UTC │
	│ ssh     │ functional-199791 ssh sudo cat /usr/share/ca-certificates/89372.pem                                                                                          │ functional-199791 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │ 19 Dec 25 02:36 UTC │
	│ ssh     │ functional-199791 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                     │ functional-199791 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │ 19 Dec 25 02:36 UTC │
	│ addons  │ functional-199791 addons list                                                                                                                                │ functional-199791 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │ 19 Dec 25 02:36 UTC │
	│ addons  │ functional-199791 addons list -o json                                                                                                                        │ functional-199791 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │ 19 Dec 25 02:36 UTC │
	│ ssh     │ functional-199791 ssh sudo cat /etc/test/nested/copy/8937/hosts                                                                                              │ functional-199791 │ jenkins │ v1.37.0 │ 19 Dec 25 02:39 UTC │ 19 Dec 25 02:39 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 02:35:47
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 02:35:47.431533   14641 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:35:47.431669   14641 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:35:47.431680   14641 out.go:374] Setting ErrFile to fd 2...
	I1219 02:35:47.431687   14641 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:35:47.431969   14641 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5010/.minikube/bin
	I1219 02:35:47.432515   14641 out.go:368] Setting JSON to false
	I1219 02:35:47.433644   14641 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1091,"bootTime":1766110656,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 02:35:47.433708   14641 start.go:143] virtualization: kvm guest
	I1219 02:35:47.436677   14641 out.go:179] * [functional-199791] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 02:35:47.437685   14641 notify.go:221] Checking for updates...
	I1219 02:35:47.437691   14641 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 02:35:47.438935   14641 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 02:35:47.439938   14641 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-5010/kubeconfig
	I1219 02:35:47.441135   14641 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5010/.minikube
	I1219 02:35:47.442105   14641 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 02:35:47.443007   14641 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 02:35:47.444464   14641 config.go:182] Loaded profile config "functional-199791": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:35:47.445071   14641 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 02:35:47.479179   14641 out.go:179] * Using the kvm2 driver based on existing profile
	I1219 02:35:47.480239   14641 start.go:309] selected driver: kvm2
	I1219 02:35:47.480254   14641 start.go:928] validating driver "kvm2" against &{Name:functional-199791 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.3 ClusterName:functional-199791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
ntString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 02:35:47.480378   14641 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 02:35:47.481714   14641 cni.go:84] Creating CNI manager for ""
	I1219 02:35:47.481800   14641 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1219 02:35:47.481873   14641 start.go:353] cluster config:
	{Name:functional-199791 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-199791 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 M
ountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 02:35:47.483729   14641 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 19 02:40:48 functional-199791 crio[5342]: time="2025-12-19 02:40:48.141834313Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766112048141772211,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:197523,},InodesUsed:&UInt64Value{Value:92,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=62dbfeaa-5ba6-4582-89f4-25c1df616d31 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 02:40:48 functional-199791 crio[5342]: time="2025-12-19 02:40:48.142605858Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7c988302-8565-4318-903f-cb07657ba60b name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 02:40:48 functional-199791 crio[5342]: time="2025-12-19 02:40:48.142674474Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7c988302-8565-4318-903f-cb07657ba60b name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 02:40:48 functional-199791 crio[5342]: time="2025-12-19 02:40:48.142970109Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:915300e33a8cd61c5f14bed9cadca6c9b66e6cfec608f7a40eb81b8c22445bc1,PodSandboxId:4cbdf2f437de9b7273e49b5dc8595eb4415bc58bb273c53514f2f223a8ce0565,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,State:CONTAINER_RUNNING,CreatedAt:1766111986017570016,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bb2654b-9961-46d3-b74f-9fbbcb8c3dc8,},Annotations:map[string]string{io.kubernetes.container.hash: 8389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a3c52996d6ef4428177d121063d57150d5c4de8dc99d3c67aa49e6528059cc0,PodSandboxId:4331c5201705a45b19bb3074d574f1e4e878066a8459f01541c8330d89b143e7,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1766111781760253534,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 22cf93c9-d4ef-4b69-aa11-9269bc23bee3,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e4ecc916e87a11785556ec14c15e599641a328ac5360ee36fa978933c69d4a5,PodSandboxId:55ace58bc419cdf89c7b5f891a67b33aa94e08b0650fb0346336f7685e19670a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766111722971085720,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 009e7ad8-75b8-4205-91aa-980d65bb83a4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14dd47506e6db4c1c5bf0105d048007e7b6fce1395f5974e14bbc518c4faf04a,PodSandboxId:874c35fdc7a5c34d400476afb453084421777d1f710b1d675a3d383f0aaa7eaa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1766111722942476552,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5j8g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d29d4a03-e8eb-4d11-bafc-d47ea5ede72e,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52a049ecd0fb48ec7198d01e8e295c6fb2a80e42ca2858b2d1b4c8077a828d55,PodSandboxId:09f936bd1f80b0df8536cee11289c78180e5f6b332e58d57e72fb7465096d86e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1766111722986772477,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-8tvs6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3a0f0e2-fe99-419e-a874-319cfe3e8dd7,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"m
etrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46f0d2f8a60f0dbd158a46e846583eb03edc2033ef005567b93ea2262aa4fa05,PodSandboxId:7f5f836c530bc91e6907a850cf150a97eef32e673540fd842d2fd5070f192d36,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1766111719544120834,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apise
rver-functional-199791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64a96f1a0516105e4ebd728fb48b0648,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa115163640dc4719c1124afeb14c79fa01b41acf029663ee8bb4e570e78c375,PodSandboxId:ceba3cafce51714487340983f595fbeb46099c81d2f775a985fdfc62749c49cf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766111
719332457450,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-199791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0c51f768ca9cef53541023863070d9e,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d903204735fb13ce21d9062ff0eb732ebe095fac0f7280517e6dd00c7ef9bd4e,PodSandboxId:e71c5e6505c8201b70109946b6c082cb4858fcf09de3d2f4913f2f8b80bd931c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766111719292096637,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-199791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d29b63d26a2dfe9948639b49d1769e8,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:338f5e7eb5c69c5e72e44dc7916b142d97aaeaf9db504e426172bbb53a6d36eb,PodSandboxId:58e184d7505e42d78b545b795c3856aebe5a8d366597ea01932725c275ec8e05,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850
085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766111716811044040,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-199791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e3d43cf11f27df64ddec0bd25dc66e3,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58657d9eea63330c183895d19b56ddc23095634d978e03443079491a1639f28b,PodSandboxId:654c1b461cf348f51f9f5081ee7d0a33524193a5db776507a0fa699a128fc2d4,Metadata:&Contain
erMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1766111679289177450,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5j8g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d29d4a03-e8eb-4d11-bafc-d47ea5ede72e,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85725cf13093dada9e4553872d01e38d79622fc88f2912555c64a806f347b8c4,PodSandboxId:88a84164bc70bcb85b52eba377e96b2b29ec6ddf471223d4156de7b0eede6597,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,
},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1766111679298440696,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-8tvs6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3a0f0e2-fe99-419e-a874-319cfe3e8dd7,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMess
agePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3075784cb0c1efff3c934fed9d726e8df01803e52a8cfe91ebf520417d3dfc95,PodSandboxId:e1d059d70d36e2c4c7d5370b5977a7ba91f8fbe8b2bb4509c589ebc9050e747d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766111679269407776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 009e7ad8-75b8-4205-91aa-980d65bb83a4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bee52ab239e495b06e0329d9e7d462a9622090d0b3752eeed407bdedbf468603,PodSandboxId:19e4328300f57681fd9b1ecb6db8edf2f79368681bf85ed161a36a6261a1ad81,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1766111653480503920,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-199791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d29b63d26a2dfe9948639b49d1769e8,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"prot
ocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb15be933a8593073b861a6d6cdf247d04730954732953f38f8926b908ce48ad,PodSandboxId:e5bbcc1d5d42b6c926b19ae16686fd957eb33b577c94f8d99f43ba93a7ce3004,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedAt:1766111653427178217,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-199791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0c51f768ca9cef53541023863070d9e,},Annotations:map[string]string{io.kubern
etes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd4284e1305c336fc0d68e458a9d6ce69c58edf45c2dcc81f806e69bd26c7e60,PodSandboxId:8d97915e3f5ac04d97267d1cfa0098c5318fd7389140d8cc4902a2205187ba66,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1766111653351343328,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-199791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 7e3d43cf11f27df64ddec0bd25dc66e3,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7c988302-8565-4318-903f-cb07657ba60b name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 02:40:48 functional-199791 crio[5342]: time="2025-12-19 02:40:48.177164239Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6163731b-a963-4a4a-a096-e5377b79bb8b name=/runtime.v1.RuntimeService/Version
	Dec 19 02:40:48 functional-199791 crio[5342]: time="2025-12-19 02:40:48.177359042Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6163731b-a963-4a4a-a096-e5377b79bb8b name=/runtime.v1.RuntimeService/Version
	Dec 19 02:40:48 functional-199791 crio[5342]: time="2025-12-19 02:40:48.178545934Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=64765dd9-6aad-4464-83c4-3e8906a688b1 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 02:40:48 functional-199791 crio[5342]: time="2025-12-19 02:40:48.179144433Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766112048179125981,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:197523,},InodesUsed:&UInt64Value{Value:92,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=64765dd9-6aad-4464-83c4-3e8906a688b1 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 02:40:48 functional-199791 crio[5342]: time="2025-12-19 02:40:48.180096120Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=afb00a6f-ad0c-4937-a6f7-f084f75c5299 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 02:40:48 functional-199791 crio[5342]: time="2025-12-19 02:40:48.180230556Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=afb00a6f-ad0c-4937-a6f7-f084f75c5299 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 02:40:48 functional-199791 crio[5342]: time="2025-12-19 02:40:48.180759231Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:915300e33a8cd61c5f14bed9cadca6c9b66e6cfec608f7a40eb81b8c22445bc1,PodSandboxId:4cbdf2f437de9b7273e49b5dc8595eb4415bc58bb273c53514f2f223a8ce0565,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,State:CONTAINER_RUNNING,CreatedAt:1766111986017570016,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bb2654b-9961-46d3-b74f-9fbbcb8c3dc8,},Annotations:map[string]string{io.kubernetes.container.hash: 8389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a3c52996d6ef4428177d121063d57150d5c4de8dc99d3c67aa49e6528059cc0,PodSandboxId:4331c5201705a45b19bb3074d574f1e4e878066a8459f01541c8330d89b143e7,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1766111781760253534,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 22cf93c9-d4ef-4b69-aa11-9269bc23bee3,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e4ecc916e87a11785556ec14c15e599641a328ac5360ee36fa978933c69d4a5,PodSandboxId:55ace58bc419cdf89c7b5f891a67b33aa94e08b0650fb0346336f7685e19670a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766111722971085720,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 009e7ad8-75b8-4205-91aa-980d65bb83a4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14dd47506e6db4c1c5bf0105d048007e7b6fce1395f5974e14bbc518c4faf04a,PodSandboxId:874c35fdc7a5c34d400476afb453084421777d1f710b1d675a3d383f0aaa7eaa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1766111722942476552,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5j8g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d29d4a03-e8eb-4d11-bafc-d47ea5ede72e,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52a049ecd0fb48ec7198d01e8e295c6fb2a80e42ca2858b2d1b4c8077a828d55,PodSandboxId:09f936bd1f80b0df8536cee11289c78180e5f6b332e58d57e72fb7465096d86e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1766111722986772477,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-8tvs6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3a0f0e2-fe99-419e-a874-319cfe3e8dd7,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"m
etrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46f0d2f8a60f0dbd158a46e846583eb03edc2033ef005567b93ea2262aa4fa05,PodSandboxId:7f5f836c530bc91e6907a850cf150a97eef32e673540fd842d2fd5070f192d36,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1766111719544120834,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apise
rver-functional-199791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64a96f1a0516105e4ebd728fb48b0648,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa115163640dc4719c1124afeb14c79fa01b41acf029663ee8bb4e570e78c375,PodSandboxId:ceba3cafce51714487340983f595fbeb46099c81d2f775a985fdfc62749c49cf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766111
719332457450,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-199791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0c51f768ca9cef53541023863070d9e,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d903204735fb13ce21d9062ff0eb732ebe095fac0f7280517e6dd00c7ef9bd4e,PodSandboxId:e71c5e6505c8201b70109946b6c082cb4858fcf09de3d2f4913f2f8b80bd931c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766111719292096637,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-199791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d29b63d26a2dfe9948639b49d1769e8,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:338f5e7eb5c69c5e72e44dc7916b142d97aaeaf9db504e426172bbb53a6d36eb,PodSandboxId:58e184d7505e42d78b545b795c3856aebe5a8d366597ea01932725c275ec8e05,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850
085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766111716811044040,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-199791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e3d43cf11f27df64ddec0bd25dc66e3,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58657d9eea63330c183895d19b56ddc23095634d978e03443079491a1639f28b,PodSandboxId:654c1b461cf348f51f9f5081ee7d0a33524193a5db776507a0fa699a128fc2d4,Metadata:&Contain
erMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1766111679289177450,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5j8g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d29d4a03-e8eb-4d11-bafc-d47ea5ede72e,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85725cf13093dada9e4553872d01e38d79622fc88f2912555c64a806f347b8c4,PodSandboxId:88a84164bc70bcb85b52eba377e96b2b29ec6ddf471223d4156de7b0eede6597,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,
},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1766111679298440696,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-8tvs6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3a0f0e2-fe99-419e-a874-319cfe3e8dd7,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMess
agePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3075784cb0c1efff3c934fed9d726e8df01803e52a8cfe91ebf520417d3dfc95,PodSandboxId:e1d059d70d36e2c4c7d5370b5977a7ba91f8fbe8b2bb4509c589ebc9050e747d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766111679269407776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 009e7ad8-75b8-4205-91aa-980d65bb83a4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bee52ab239e495b06e0329d9e7d462a9622090d0b3752eeed407bdedbf468603,PodSandboxId:19e4328300f57681fd9b1ecb6db8edf2f79368681bf85ed161a36a6261a1ad81,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1766111653480503920,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-199791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d29b63d26a2dfe9948639b49d1769e8,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"prot
ocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb15be933a8593073b861a6d6cdf247d04730954732953f38f8926b908ce48ad,PodSandboxId:e5bbcc1d5d42b6c926b19ae16686fd957eb33b577c94f8d99f43ba93a7ce3004,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedAt:1766111653427178217,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-199791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0c51f768ca9cef53541023863070d9e,},Annotations:map[string]string{io.kubern
etes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd4284e1305c336fc0d68e458a9d6ce69c58edf45c2dcc81f806e69bd26c7e60,PodSandboxId:8d97915e3f5ac04d97267d1cfa0098c5318fd7389140d8cc4902a2205187ba66,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1766111653351343328,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-199791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 7e3d43cf11f27df64ddec0bd25dc66e3,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=afb00a6f-ad0c-4937-a6f7-f084f75c5299 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 02:40:48 functional-199791 crio[5342]: time="2025-12-19 02:40:48.216896571Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=27670971-1dce-4628-9a22-2a7df13790dd name=/runtime.v1.RuntimeService/Version
	Dec 19 02:40:48 functional-199791 crio[5342]: time="2025-12-19 02:40:48.217102435Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=27670971-1dce-4628-9a22-2a7df13790dd name=/runtime.v1.RuntimeService/Version
	Dec 19 02:40:48 functional-199791 crio[5342]: time="2025-12-19 02:40:48.218250982Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=855b1ced-41d2-41ad-abb5-40b7eca33b0f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 02:40:48 functional-199791 crio[5342]: time="2025-12-19 02:40:48.218941549Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766112048218922427,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:197523,},InodesUsed:&UInt64Value{Value:92,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=855b1ced-41d2-41ad-abb5-40b7eca33b0f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 02:40:48 functional-199791 crio[5342]: time="2025-12-19 02:40:48.219870773Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=76de3bd1-f406-4066-a1a4-3b0b96e6f60c name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 02:40:48 functional-199791 crio[5342]: time="2025-12-19 02:40:48.219935685Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=76de3bd1-f406-4066-a1a4-3b0b96e6f60c name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 02:40:48 functional-199791 crio[5342]: time="2025-12-19 02:40:48.220442174Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:915300e33a8cd61c5f14bed9cadca6c9b66e6cfec608f7a40eb81b8c22445bc1,PodSandboxId:4cbdf2f437de9b7273e49b5dc8595eb4415bc58bb273c53514f2f223a8ce0565,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,State:CONTAINER_RUNNING,CreatedAt:1766111986017570016,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bb2654b-9961-46d3-b74f-9fbbcb8c3dc8,},Annotations:map[string]string{io.kubernetes.container.hash: 8389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a3c52996d6ef4428177d121063d57150d5c4de8dc99d3c67aa49e6528059cc0,PodSandboxId:4331c5201705a45b19bb3074d574f1e4e878066a8459f01541c8330d89b143e7,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1766111781760253534,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 22cf93c9-d4ef-4b69-aa11-9269bc23bee3,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e4ecc916e87a11785556ec14c15e599641a328ac5360ee36fa978933c69d4a5,PodSandboxId:55ace58bc419cdf89c7b5f891a67b33aa94e08b0650fb0346336f7685e19670a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766111722971085720,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 009e7ad8-75b8-4205-91aa-980d65bb83a4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14dd47506e6db4c1c5bf0105d048007e7b6fce1395f5974e14bbc518c4faf04a,PodSandboxId:874c35fdc7a5c34d400476afb453084421777d1f710b1d675a3d383f0aaa7eaa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1766111722942476552,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5j8g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d29d4a03-e8eb-4d11-bafc-d47ea5ede72e,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52a049ecd0fb48ec7198d01e8e295c6fb2a80e42ca2858b2d1b4c8077a828d55,PodSandboxId:09f936bd1f80b0df8536cee11289c78180e5f6b332e58d57e72fb7465096d86e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1766111722986772477,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-8tvs6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3a0f0e2-fe99-419e-a874-319cfe3e8dd7,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"m
etrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46f0d2f8a60f0dbd158a46e846583eb03edc2033ef005567b93ea2262aa4fa05,PodSandboxId:7f5f836c530bc91e6907a850cf150a97eef32e673540fd842d2fd5070f192d36,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1766111719544120834,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apise
rver-functional-199791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64a96f1a0516105e4ebd728fb48b0648,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa115163640dc4719c1124afeb14c79fa01b41acf029663ee8bb4e570e78c375,PodSandboxId:ceba3cafce51714487340983f595fbeb46099c81d2f775a985fdfc62749c49cf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766111
719332457450,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-199791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0c51f768ca9cef53541023863070d9e,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d903204735fb13ce21d9062ff0eb732ebe095fac0f7280517e6dd00c7ef9bd4e,PodSandboxId:e71c5e6505c8201b70109946b6c082cb4858fcf09de3d2f4913f2f8b80bd931c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766111719292096637,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-199791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d29b63d26a2dfe9948639b49d1769e8,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:338f5e7eb5c69c5e72e44dc7916b142d97aaeaf9db504e426172bbb53a6d36eb,PodSandboxId:58e184d7505e42d78b545b795c3856aebe5a8d366597ea01932725c275ec8e05,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850
085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766111716811044040,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-199791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e3d43cf11f27df64ddec0bd25dc66e3,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58657d9eea63330c183895d19b56ddc23095634d978e03443079491a1639f28b,PodSandboxId:654c1b461cf348f51f9f5081ee7d0a33524193a5db776507a0fa699a128fc2d4,Metadata:&Contain
erMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1766111679289177450,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5j8g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d29d4a03-e8eb-4d11-bafc-d47ea5ede72e,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85725cf13093dada9e4553872d01e38d79622fc88f2912555c64a806f347b8c4,PodSandboxId:88a84164bc70bcb85b52eba377e96b2b29ec6ddf471223d4156de7b0eede6597,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,
},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1766111679298440696,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-8tvs6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3a0f0e2-fe99-419e-a874-319cfe3e8dd7,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMess
agePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3075784cb0c1efff3c934fed9d726e8df01803e52a8cfe91ebf520417d3dfc95,PodSandboxId:e1d059d70d36e2c4c7d5370b5977a7ba91f8fbe8b2bb4509c589ebc9050e747d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766111679269407776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 009e7ad8-75b8-4205-91aa-980d65bb83a4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bee52ab239e495b06e0329d9e7d462a9622090d0b3752eeed407bdedbf468603,PodSandboxId:19e4328300f57681fd9b1ecb6db8edf2f79368681bf85ed161a36a6261a1ad81,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1766111653480503920,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-199791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d29b63d26a2dfe9948639b49d1769e8,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"prot
ocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb15be933a8593073b861a6d6cdf247d04730954732953f38f8926b908ce48ad,PodSandboxId:e5bbcc1d5d42b6c926b19ae16686fd957eb33b577c94f8d99f43ba93a7ce3004,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedAt:1766111653427178217,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-199791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0c51f768ca9cef53541023863070d9e,},Annotations:map[string]string{io.kubern
etes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd4284e1305c336fc0d68e458a9d6ce69c58edf45c2dcc81f806e69bd26c7e60,PodSandboxId:8d97915e3f5ac04d97267d1cfa0098c5318fd7389140d8cc4902a2205187ba66,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1766111653351343328,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-199791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 7e3d43cf11f27df64ddec0bd25dc66e3,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=76de3bd1-f406-4066-a1a4-3b0b96e6f60c name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 02:40:48 functional-199791 crio[5342]: time="2025-12-19 02:40:48.248370537Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a08023cf-7189-4885-b305-c62385ca2b37 name=/runtime.v1.RuntimeService/Version
	Dec 19 02:40:48 functional-199791 crio[5342]: time="2025-12-19 02:40:48.248592692Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a08023cf-7189-4885-b305-c62385ca2b37 name=/runtime.v1.RuntimeService/Version
	Dec 19 02:40:48 functional-199791 crio[5342]: time="2025-12-19 02:40:48.250297326Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c39939d1-023a-4531-8189-384d95d6b1f2 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 02:40:48 functional-199791 crio[5342]: time="2025-12-19 02:40:48.251168318Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766112048251144744,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:197523,},InodesUsed:&UInt64Value{Value:92,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c39939d1-023a-4531-8189-384d95d6b1f2 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 02:40:48 functional-199791 crio[5342]: time="2025-12-19 02:40:48.252004068Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=87a265ba-d25c-4494-816d-f5881a801ac2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 02:40:48 functional-199791 crio[5342]: time="2025-12-19 02:40:48.252056375Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=87a265ba-d25c-4494-816d-f5881a801ac2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 02:40:48 functional-199791 crio[5342]: time="2025-12-19 02:40:48.252302195Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:915300e33a8cd61c5f14bed9cadca6c9b66e6cfec608f7a40eb81b8c22445bc1,PodSandboxId:4cbdf2f437de9b7273e49b5dc8595eb4415bc58bb273c53514f2f223a8ce0565,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,State:CONTAINER_RUNNING,CreatedAt:1766111986017570016,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bb2654b-9961-46d3-b74f-9fbbcb8c3dc8,},Annotations:map[string]string{io.kubernetes.container.hash: 8389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a3c52996d6ef4428177d121063d57150d5c4de8dc99d3c67aa49e6528059cc0,PodSandboxId:4331c5201705a45b19bb3074d574f1e4e878066a8459f01541c8330d89b143e7,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1766111781760253534,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 22cf93c9-d4ef-4b69-aa11-9269bc23bee3,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e4ecc916e87a11785556ec14c15e599641a328ac5360ee36fa978933c69d4a5,PodSandboxId:55ace58bc419cdf89c7b5f891a67b33aa94e08b0650fb0346336f7685e19670a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766111722971085720,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 009e7ad8-75b8-4205-91aa-980d65bb83a4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14dd47506e6db4c1c5bf0105d048007e7b6fce1395f5974e14bbc518c4faf04a,PodSandboxId:874c35fdc7a5c34d400476afb453084421777d1f710b1d675a3d383f0aaa7eaa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1766111722942476552,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5j8g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d29d4a03-e8eb-4d11-bafc-d47ea5ede72e,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52a049ecd0fb48ec7198d01e8e295c6fb2a80e42ca2858b2d1b4c8077a828d55,PodSandboxId:09f936bd1f80b0df8536cee11289c78180e5f6b332e58d57e72fb7465096d86e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1766111722986772477,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-8tvs6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3a0f0e2-fe99-419e-a874-319cfe3e8dd7,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"m
etrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46f0d2f8a60f0dbd158a46e846583eb03edc2033ef005567b93ea2262aa4fa05,PodSandboxId:7f5f836c530bc91e6907a850cf150a97eef32e673540fd842d2fd5070f192d36,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1766111719544120834,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apise
rver-functional-199791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64a96f1a0516105e4ebd728fb48b0648,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa115163640dc4719c1124afeb14c79fa01b41acf029663ee8bb4e570e78c375,PodSandboxId:ceba3cafce51714487340983f595fbeb46099c81d2f775a985fdfc62749c49cf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766111
719332457450,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-199791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0c51f768ca9cef53541023863070d9e,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d903204735fb13ce21d9062ff0eb732ebe095fac0f7280517e6dd00c7ef9bd4e,PodSandboxId:e71c5e6505c8201b70109946b6c082cb4858fcf09de3d2f4913f2f8b80bd931c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766111719292096637,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-199791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d29b63d26a2dfe9948639b49d1769e8,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:338f5e7eb5c69c5e72e44dc7916b142d97aaeaf9db504e426172bbb53a6d36eb,PodSandboxId:58e184d7505e42d78b545b795c3856aebe5a8d366597ea01932725c275ec8e05,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850
085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766111716811044040,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-199791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e3d43cf11f27df64ddec0bd25dc66e3,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58657d9eea63330c183895d19b56ddc23095634d978e03443079491a1639f28b,PodSandboxId:654c1b461cf348f51f9f5081ee7d0a33524193a5db776507a0fa699a128fc2d4,Metadata:&Contain
erMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1766111679289177450,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5j8g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d29d4a03-e8eb-4d11-bafc-d47ea5ede72e,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85725cf13093dada9e4553872d01e38d79622fc88f2912555c64a806f347b8c4,PodSandboxId:88a84164bc70bcb85b52eba377e96b2b29ec6ddf471223d4156de7b0eede6597,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,
},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1766111679298440696,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-8tvs6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3a0f0e2-fe99-419e-a874-319cfe3e8dd7,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMess
agePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3075784cb0c1efff3c934fed9d726e8df01803e52a8cfe91ebf520417d3dfc95,PodSandboxId:e1d059d70d36e2c4c7d5370b5977a7ba91f8fbe8b2bb4509c589ebc9050e747d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766111679269407776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 009e7ad8-75b8-4205-91aa-980d65bb83a4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bee52ab239e495b06e0329d9e7d462a9622090d0b3752eeed407bdedbf468603,PodSandboxId:19e4328300f57681fd9b1ecb6db8edf2f79368681bf85ed161a36a6261a1ad81,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1766111653480503920,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-199791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d29b63d26a2dfe9948639b49d1769e8,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"prot
ocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb15be933a8593073b861a6d6cdf247d04730954732953f38f8926b908ce48ad,PodSandboxId:e5bbcc1d5d42b6c926b19ae16686fd957eb33b577c94f8d99f43ba93a7ce3004,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedAt:1766111653427178217,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-199791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0c51f768ca9cef53541023863070d9e,},Annotations:map[string]string{io.kubern
etes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd4284e1305c336fc0d68e458a9d6ce69c58edf45c2dcc81f806e69bd26c7e60,PodSandboxId:8d97915e3f5ac04d97267d1cfa0098c5318fd7389140d8cc4902a2205187ba66,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1766111653351343328,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-199791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 7e3d43cf11f27df64ddec0bd25dc66e3,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=87a265ba-d25c-4494-816d-f5881a801ac2 name=/runtime.v1.RuntimeService/ListContainers
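	
	The request/response pairs above are ordinary CRI gRPC calls that CRI-O records through its otel interceptor: /runtime.v1.RuntimeService/Version, /runtime.v1.RuntimeService/ListContainers (with an empty filter, hence "No filters were applied, returning full container list"), and /runtime.v1.ImageService/ImageFsInfo. For reference only (not part of the report), the following is a minimal Go sketch of a client issuing those same three RPCs; the socket path /var/run/crio/crio.sock and the use of grpc.Dial with insecure transport credentials are assumptions about the environment, not something the report states.
	
	// Sketch: issue the CRI calls seen in the crio debug log above.
	// Assumes CRI-O is listening on /var/run/crio/crio.sock (the usual default).
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()
	
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
	
		rt := runtimev1.NewRuntimeServiceClient(conn)
		img := runtimev1.NewImageServiceClient(conn)
	
		// /runtime.v1.RuntimeService/Version
		ver, err := rt.Version(ctx, &runtimev1.VersionRequest{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)
	
		// /runtime.v1.RuntimeService/ListContainers with an empty filter returns
		// the full container list, matching the responses logged above.
		list, err := rt.ListContainers(ctx, &runtimev1.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range list.Containers {
			fmt.Println(c.Metadata.Name, c.State)
		}
	
		// /runtime.v1.ImageService/ImageFsInfo reports image filesystem usage.
		fs, err := img.ImageFsInfo(ctx, &runtimev1.ImageFsInfoRequest{})
		if err != nil {
			panic(err)
		}
		for _, f := range fs.ImageFilesystems {
			fmt.Println(f.FsId.Mountpoint, f.UsedBytes.Value)
		}
	}
	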
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	915300e33a8cd       04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5                                      About a minute ago   Running             myfrontend                0                   4cbdf2f437de9       sp-pod                                      default
	7a3c52996d6ef       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   4 minutes ago        Exited              mount-munger              0                   4331c5201705a       busybox-mount                               default
	52a049ecd0fb4       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      5 minutes ago        Running             coredns                   2                   09f936bd1f80b       coredns-66bc5c9577-8tvs6                    kube-system
	4e4ecc916e87a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago        Running             storage-provisioner       3                   55ace58bc419c       storage-provisioner                         kube-system
	14dd47506e6db       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                      5 minutes ago        Running             kube-proxy                2                   874c35fdc7a5c       kube-proxy-m5j8g                            kube-system
	46f0d2f8a60f0       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                      5 minutes ago        Running             kube-apiserver            0                   7f5f836c530bc       kube-apiserver-functional-199791            kube-system
	fa115163640dc       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                      5 minutes ago        Running             kube-controller-manager   2                   ceba3cafce517       kube-controller-manager-functional-199791   kube-system
	d903204735fb1       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                      5 minutes ago        Running             kube-scheduler            2                   e71c5e6505c82       kube-scheduler-functional-199791            kube-system
	338f5e7eb5c69       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      5 minutes ago        Running             etcd                      2                   58e184d7505e4       etcd-functional-199791                      kube-system
	85725cf13093d       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      6 minutes ago        Exited              coredns                   1                   88a84164bc70b       coredns-66bc5c9577-8tvs6                    kube-system
	58657d9eea633       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                      6 minutes ago        Exited              kube-proxy                1                   654c1b461cf34       kube-proxy-m5j8g                            kube-system
	3075784cb0c1e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago        Exited              storage-provisioner       2                   e1d059d70d36e       storage-provisioner                         kube-system
	bee52ab239e49       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                      6 minutes ago        Exited              kube-scheduler            1                   19e4328300f57       kube-scheduler-functional-199791            kube-system
	eb15be933a859       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                      6 minutes ago        Exited              kube-controller-manager   1                   e5bbcc1d5d42b       kube-controller-manager-functional-199791   kube-system
	cd4284e1305c3       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      6 minutes ago        Exited              etcd                      1                   8d97915e3f5ac       etcd-functional-199791                      kube-system
	
	
	==> coredns [52a049ecd0fb48ec7198d01e8e295c6fb2a80e42ca2858b2d1b4c8077a828d55] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45039 - 52469 "HINFO IN 1968621207169872046.2815946503348017369. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.013061154s
	
	
	==> coredns [85725cf13093dada9e4553872d01e38d79622fc88f2912555c64a806f347b8c4] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45018 - 36548 "HINFO IN 5046039487923452892.2455866171052880201. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.063275704s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-199791
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-199791
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=functional-199791
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T02_33_42_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 02:33:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-199791
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 02:40:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 02:39:58 +0000   Fri, 19 Dec 2025 02:33:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 02:39:58 +0000   Fri, 19 Dec 2025 02:33:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 02:39:58 +0000   Fri, 19 Dec 2025 02:33:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 02:39:58 +0000   Fri, 19 Dec 2025 02:33:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.97
	  Hostname:    functional-199791
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001784Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001784Ki
	  pods:               110
	System Info:
	  Machine ID:                 234b0954db5c424884f9cda9cc9df33d
	  System UUID:                234b0954-db5c-4248-84f9-cda9cc9df33d
	  Boot ID:                    da2082df-9ed6-451c-a2bd-b19411e49feb
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (16 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-7lwrx                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	  default                     hello-node-connect-7d85dfc575-2gcm4                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	  default                     mysql-6bcdcbc558-qcs65                                   600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    57s
	  default                     sp-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 coredns-66bc5c9577-8tvs6                                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m1s
	  kube-system                 etcd-functional-199791                                   100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m7s
	  kube-system                 kube-apiserver-functional-199791                         250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-controller-manager-functional-199791                200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m7s
	  kube-system                 kube-proxy-m5j8g                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m2s
	  kube-system                 kube-scheduler-functional-199791                         100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m7s
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m
	  kubernetes-dashboard        kubernetes-dashboard-api-55487dd988-m95fb                100m (5%)     250m (12%)  200Mi (5%)       400Mi (10%)    4m56s
	  kubernetes-dashboard        kubernetes-dashboard-auth-59779df8d5-n7zvj               100m (5%)     250m (12%)  200Mi (5%)       400Mi (10%)    4m56s
	  kubernetes-dashboard        kubernetes-dashboard-kong-9849c64bd-shqrq                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  kubernetes-dashboard        kubernetes-dashboard-metrics-scraper-7685fd8b77-46x9w    100m (5%)     250m (12%)  200Mi (5%)       400Mi (10%)    4m56s
	  kubernetes-dashboard        kubernetes-dashboard-web-5c9f966b98-jmj2w                100m (5%)     250m (12%)  200Mi (5%)       400Mi (10%)    4m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests      Limits
	  --------           --------      ------
	  cpu                1750m (87%)   1700m (85%)
	  memory             1482Mi (37%)  2470Mi (63%)
	  ephemeral-storage  0 (0%)        0 (0%)
	  hugepages-2Mi      0 (0%)        0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m1s                   kube-proxy       
	  Normal  Starting                 5m24s                  kube-proxy       
	  Normal  Starting                 6m8s                   kube-proxy       
	  Normal  Starting                 7m13s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m13s (x8 over 7m13s)  kubelet          Node functional-199791 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m13s (x8 over 7m13s)  kubelet          Node functional-199791 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m13s (x7 over 7m13s)  kubelet          Node functional-199791 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     7m7s                   kubelet          Node functional-199791 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m7s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m7s                   kubelet          Node functional-199791 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m7s                   kubelet          Node functional-199791 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 7m7s                   kubelet          Starting kubelet.
	  Normal  NodeReady                7m6s                   kubelet          Node functional-199791 status is now: NodeReady
	  Normal  RegisteredNode           7m3s                   node-controller  Node functional-199791 event: Registered Node functional-199791 in Controller
	  Normal  Starting                 6m34s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    6m33s (x8 over 6m33s)  kubelet          Node functional-199791 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  6m33s (x8 over 6m33s)  kubelet          Node functional-199791 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     6m33s (x7 over 6m33s)  kubelet          Node functional-199791 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m6s                   node-controller  Node functional-199791 event: Registered Node functional-199791 in Controller
	  Normal  Starting                 5m30s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m30s (x8 over 5m30s)  kubelet          Node functional-199791 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m30s (x8 over 5m30s)  kubelet          Node functional-199791 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m30s (x7 over 5m30s)  kubelet          Node functional-199791 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m23s                  node-controller  Node functional-199791 event: Registered Node functional-199791 in Controller
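	
	Side note, not from the report: the "Allocated resources" percentages above are just the summed pod requests/limits divided by the node's allocatable capacity (2 CPUs, 4001784Ki memory), with the fraction apparently truncated rather than rounded (1750m of 2000m shows as 87%). A quick sketch of that arithmetic, values copied from the table:
	
	package main
	
	import "fmt"
	
	// pct uses integer division, which truncates like the percentages shown above.
	func pct(used, total int64) int64 { return used * 100 / total }
	
	func main() {
		fmt.Println("cpu requests:", pct(1750, 2000), "%")         // 87
		fmt.Println("cpu limits:  ", pct(1700, 2000), "%")         // 85
		fmt.Println("mem requests:", pct(1482*1024, 4001784), "%") // 37 (1482Mi vs 4001784Ki allocatable)
		fmt.Println("mem limits:  ", pct(2470*1024, 4001784), "%") // 63
	}
	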
	
	
	==> dmesg <==
	[  +0.009008] (rpcbind)[121]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.178847] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000013] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.083161] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.103253] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.119955] kauditd_printk_skb: 171 callbacks suppressed
	[  +5.402477] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.562650] kauditd_printk_skb: 248 callbacks suppressed
	[Dec19 02:34] kauditd_printk_skb: 45 callbacks suppressed
	[  +7.285406] kauditd_printk_skb: 56 callbacks suppressed
	[ +20.521325] kauditd_printk_skb: 277 callbacks suppressed
	[  +1.896560] kauditd_printk_skb: 57 callbacks suppressed
	[Dec19 02:35] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.828763] kauditd_printk_skb: 78 callbacks suppressed
	[  +4.325250] kauditd_printk_skb: 281 callbacks suppressed
	[ +17.406952] kauditd_printk_skb: 51 callbacks suppressed
	[  +0.000034] kauditd_printk_skb: 133 callbacks suppressed
	[  +0.502326] kauditd_printk_skb: 197 callbacks suppressed
	[Dec19 02:36] kauditd_printk_skb: 47 callbacks suppressed
	[  +1.002971] kauditd_printk_skb: 29 callbacks suppressed
	[ +16.463942] kauditd_printk_skb: 38 callbacks suppressed
	[Dec19 02:39] kauditd_printk_skb: 5 callbacks suppressed
	[  +6.054527] kauditd_printk_skb: 61 callbacks suppressed
	[Dec19 02:40] kauditd_printk_skb: 38 callbacks suppressed
	
	
	==> etcd [338f5e7eb5c69c5e72e44dc7916b142d97aaeaf9db504e426172bbb53a6d36eb] <==
	{"level":"warn","ts":"2025-12-19T02:35:21.339848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:35:21.345521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:35:21.354709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:35:21.366114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:35:21.373654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:35:21.383577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:35:21.394226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:35:21.404966Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:35:21.414031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:35:21.427548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:35:21.434852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:35:21.444975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:35:21.501862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:35:55.577169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:35:55.595650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:35:55.624011Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:35:55.645855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:35:55.661721Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:35:55.689315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:35:55.698102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:35:55.714978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:35:55.727080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:35:55.742734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:35:55.753327Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:35:55.766005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44420","server-name":"","error":"EOF"}
	
	
	==> etcd [cd4284e1305c336fc0d68e458a9d6ce69c58edf45c2dcc81f806e69bd26c7e60] <==
	{"level":"warn","ts":"2025-12-19T02:34:38.124414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:34:38.133292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:34:38.140842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:34:38.150636Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:34:38.156220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:34:38.163970Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:34:38.218090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43278","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-19T02:35:04.373005Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-19T02:35:04.373101Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-199791","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.97:2380"],"advertise-client-urls":["https://192.168.39.97:2379"]}
	{"level":"error","ts":"2025-12-19T02:35:04.373204Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-19T02:35:04.446479Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-19T02:35:04.447903Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-19T02:35:04.448008Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f61fae125a956d36","current-leader-member-id":"f61fae125a956d36"}
	{"level":"warn","ts":"2025-12-19T02:35:04.448024Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-19T02:35:04.448091Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-19T02:35:04.448099Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-19T02:35:04.448111Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-19T02:35:04.448123Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-19T02:35:04.448208Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.97:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-19T02:35:04.448225Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.97:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-19T02:35:04.448230Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.97:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-19T02:35:04.450532Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.97:2380"}
	{"level":"error","ts":"2025-12-19T02:35:04.450598Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.97:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-19T02:35:04.450618Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.97:2380"}
	{"level":"info","ts":"2025-12-19T02:35:04.450623Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-199791","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.97:2380"],"advertise-client-urls":["https://192.168.39.97:2379"]}
	
	
	==> kernel <==
	 02:40:48 up 7 min,  0 users,  load average: 0.86, 0.56, 0.28
	Linux functional-199791 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Dec 17 12:49:57 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [46f0d2f8a60f0dbd158a46e846583eb03edc2033ef005567b93ea2262aa4fa05] <==
	I1219 02:35:49.409062       1 handler.go:285] Adding GroupVersion configuration.konghq.com v1 to ResourceManager
	I1219 02:35:49.422697       1 handler.go:285] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
	I1219 02:35:49.432755       1 handler.go:285] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
	I1219 02:35:51.833696       1 controller.go:667] quota admission added evaluator for: namespaces
	I1219 02:35:51.908690       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-web" clusterIPs={"IPv4":"10.99.54.140"}
	I1219 02:35:51.927006       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-kong-proxy" clusterIPs={"IPv4":"10.100.17.50"}
	I1219 02:35:51.931223       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-api" clusterIPs={"IPv4":"10.108.89.26"}
	I1219 02:35:51.949104       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-auth" clusterIPs={"IPv4":"10.101.76.242"}
	I1219 02:35:51.953340       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.36.176"}
	W1219 02:35:55.576924       1 logging.go:55] [core] [Channel #262 SubChannel #263]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:35:55.595559       1 logging.go:55] [core] [Channel #266 SubChannel #267]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:35:55.619008       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:35:55.640174       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:35:55.661743       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:35:55.689078       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:35:55.698101       1 logging.go:55] [core] [Channel #286 SubChannel #287]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:35:55.714985       1 logging.go:55] [core] [Channel #290 SubChannel #291]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:35:55.726744       1 logging.go:55] [core] [Channel #294 SubChannel #295]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1219 02:35:55.742735       1 logging.go:55] [core] [Channel #298 SubChannel #299]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:35:55.753287       1 logging.go:55] [core] [Channel #302 SubChannel #303]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:35:55.765961       1 logging.go:55] [core] [Channel #306 SubChannel #307]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1219 02:36:36.812853       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.107.160.98"}
	E1219 02:39:44.126008       1 conn.go:339] Error on socket receive: read tcp 192.168.39.97:8441->192.168.39.1:48230: use of closed network connection
	E1219 02:39:51.606508       1 conn.go:339] Error on socket receive: read tcp 192.168.39.97:8441->192.168.39.1:53096: use of closed network connection
	I1219 02:39:51.881111       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.98.103.219"}
	
	
	==> kube-controller-manager [eb15be933a8593073b861a6d6cdf247d04730954732953f38f8926b908ce48ad] <==
	I1219 02:34:42.166042       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1219 02:34:42.169402       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1219 02:34:42.171680       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1219 02:34:42.171700       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1219 02:34:42.171764       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1219 02:34:42.171874       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-199791"
	I1219 02:34:42.171917       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1219 02:34:42.175086       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1219 02:34:42.178845       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1219 02:34:42.182126       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1219 02:34:42.186364       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1219 02:34:42.187534       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1219 02:34:42.193743       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1219 02:34:42.198169       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1219 02:34:42.201146       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1219 02:34:42.201180       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1219 02:34:42.201488       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1219 02:34:42.201543       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1219 02:34:42.201917       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1219 02:34:42.212299       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1219 02:34:42.212376       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1219 02:34:42.212441       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1219 02:34:42.212461       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1219 02:34:42.217978       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1219 02:34:42.220151       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	
	
	==> kube-controller-manager [fa115163640dc4719c1124afeb14c79fa01b41acf029663ee8bb4e570e78c375] <==
	I1219 02:35:25.582388       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1219 02:35:25.582435       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1219 02:35:25.582454       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1219 02:35:25.582459       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1219 02:35:25.584454       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1219 02:35:25.592770       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1219 02:35:25.603743       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1219 02:35:25.603768       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1219 02:35:25.603774       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1219 02:35:25.604819       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1219 02:35:25.605873       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1219 02:35:25.608098       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1219 02:35:55.566516       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="kongconsumers.configuration.konghq.com"
	I1219 02:35:55.566569       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="kongcustomentities.configuration.konghq.com"
	I1219 02:35:55.566600       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="tcpingresses.configuration.konghq.com"
	I1219 02:35:55.566618       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="udpingresses.configuration.konghq.com"
	I1219 02:35:55.566633       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="kongupstreampolicies.configuration.konghq.com"
	I1219 02:35:55.566657       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="kongingresses.configuration.konghq.com"
	I1219 02:35:55.566681       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="kongplugins.configuration.konghq.com"
	I1219 02:35:55.566700       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingressclassparameterses.configuration.konghq.com"
	I1219 02:35:55.566725       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="kongconsumergroups.configuration.konghq.com"
	I1219 02:35:55.567019       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1219 02:35:55.609293       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1219 02:35:56.768760       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1219 02:35:56.810659       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [14dd47506e6db4c1c5bf0105d048007e7b6fce1395f5974e14bbc518c4faf04a] <==
	I1219 02:35:23.503232       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1219 02:35:23.603506       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1219 02:35:23.603535       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.97"]
	E1219 02:35:23.603650       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 02:35:23.635171       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1219 02:35:23.635275       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1219 02:35:23.635368       1 server_linux.go:132] "Using iptables Proxier"
	I1219 02:35:23.643469       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 02:35:23.643693       1 server.go:527] "Version info" version="v1.34.3"
	I1219 02:35:23.643719       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 02:35:23.647725       1 config.go:200] "Starting service config controller"
	I1219 02:35:23.647766       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 02:35:23.647778       1 config.go:106] "Starting endpoint slice config controller"
	I1219 02:35:23.647870       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 02:35:23.647909       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 02:35:23.647914       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 02:35:23.650778       1 config.go:309] "Starting node config controller"
	I1219 02:35:23.650960       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 02:35:23.651026       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 02:35:23.748896       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1219 02:35:23.748934       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 02:35:23.748950       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [58657d9eea63330c183895d19b56ddc23095634d978e03443079491a1639f28b] <==
	I1219 02:34:39.537664       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1219 02:34:39.638974       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1219 02:34:39.639014       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.97"]
	E1219 02:34:39.639074       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 02:34:39.669314       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1219 02:34:39.669367       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1219 02:34:39.669389       1 server_linux.go:132] "Using iptables Proxier"
	I1219 02:34:39.677347       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 02:34:39.677560       1 server.go:527] "Version info" version="v1.34.3"
	I1219 02:34:39.677584       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 02:34:39.681637       1 config.go:200] "Starting service config controller"
	I1219 02:34:39.681677       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 02:34:39.681702       1 config.go:106] "Starting endpoint slice config controller"
	I1219 02:34:39.681717       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 02:34:39.681743       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 02:34:39.681757       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 02:34:39.683323       1 config.go:309] "Starting node config controller"
	I1219 02:34:39.683545       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 02:34:39.683568       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 02:34:39.781988       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1219 02:34:39.782036       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 02:34:39.782051       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [bee52ab239e495b06e0329d9e7d462a9622090d0b3752eeed407bdedbf468603] <==
	I1219 02:34:36.594680       1 serving.go:386] Generated self-signed cert in-memory
	W1219 02:34:38.741608       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1219 02:34:38.741671       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1219 02:34:38.741693       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1219 02:34:38.741710       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1219 02:34:38.844049       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1219 02:34:38.847658       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 02:34:38.851579       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1219 02:34:38.852772       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1219 02:34:38.852857       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 02:34:38.867019       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 02:34:38.967418       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 02:35:04.388266       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1219 02:35:04.388679       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1219 02:35:04.388863       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1219 02:35:04.389100       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 02:35:04.389647       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1219 02:35:04.389748       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d903204735fb13ce21d9062ff0eb732ebe095fac0f7280517e6dd00c7ef9bd4e] <==
	I1219 02:35:21.360923       1 serving.go:386] Generated self-signed cert in-memory
	I1219 02:35:23.329893       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1219 02:35:23.330954       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 02:35:23.339922       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1219 02:35:23.340017       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1219 02:35:23.340119       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 02:35:23.340144       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 02:35:23.340167       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1219 02:35:23.340191       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1219 02:35:23.341060       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1219 02:35:23.341399       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1219 02:35:23.441256       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1219 02:35:23.441470       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 02:35:23.441501       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Dec 19 02:40:08 functional-199791 kubelet[6110]: E1219 02:40:08.751140    6110 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Dec 19 02:40:08 functional-199791 kubelet[6110]: E1219 02:40:08.751318    6110 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-75c85bcc94-7lwrx_default(96192f1e-8144-4970-a848-961ca9d6a26b): ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 19 02:40:08 functional-199791 kubelet[6110]: E1219 02:40:08.751350    6110 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-7lwrx" podUID="96192f1e-8144-4970-a848-961ca9d6a26b"
	Dec 19 02:40:08 functional-199791 kubelet[6110]: E1219 02:40:08.859141    6110 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766112008858851327  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:197523}  inodes_used:{value:92}}"
	Dec 19 02:40:08 functional-199791 kubelet[6110]: E1219 02:40:08.859647    6110 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766112008858851327  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:197523}  inodes_used:{value:92}}"
	Dec 19 02:40:18 functional-199791 kubelet[6110]: E1219 02:40:18.781236    6110 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod6d29b63d26a2dfe9948639b49d1769e8/crio-19e4328300f57681fd9b1ecb6db8edf2f79368681bf85ed161a36a6261a1ad81: Error finding container 19e4328300f57681fd9b1ecb6db8edf2f79368681bf85ed161a36a6261a1ad81: Status 404 returned error can't find the container with id 19e4328300f57681fd9b1ecb6db8edf2f79368681bf85ed161a36a6261a1ad81
	Dec 19 02:40:18 functional-199791 kubelet[6110]: E1219 02:40:18.781885    6110 manager.go:1116] Failed to create existing container: /kubepods/burstable/podd3a0f0e2-fe99-419e-a874-319cfe3e8dd7/crio-88a84164bc70bcb85b52eba377e96b2b29ec6ddf471223d4156de7b0eede6597: Error finding container 88a84164bc70bcb85b52eba377e96b2b29ec6ddf471223d4156de7b0eede6597: Status 404 returned error can't find the container with id 88a84164bc70bcb85b52eba377e96b2b29ec6ddf471223d4156de7b0eede6597
	Dec 19 02:40:18 functional-199791 kubelet[6110]: E1219 02:40:18.782172    6110 manager.go:1116] Failed to create existing container: /kubepods/besteffort/podd29d4a03-e8eb-4d11-bafc-d47ea5ede72e/crio-654c1b461cf348f51f9f5081ee7d0a33524193a5db776507a0fa699a128fc2d4: Error finding container 654c1b461cf348f51f9f5081ee7d0a33524193a5db776507a0fa699a128fc2d4: Status 404 returned error can't find the container with id 654c1b461cf348f51f9f5081ee7d0a33524193a5db776507a0fa699a128fc2d4
	Dec 19 02:40:18 functional-199791 kubelet[6110]: E1219 02:40:18.782447    6110 manager.go:1116] Failed to create existing container: /kubepods/burstable/podf0c51f768ca9cef53541023863070d9e/crio-e5bbcc1d5d42b6c926b19ae16686fd957eb33b577c94f8d99f43ba93a7ce3004: Error finding container e5bbcc1d5d42b6c926b19ae16686fd957eb33b577c94f8d99f43ba93a7ce3004: Status 404 returned error can't find the container with id e5bbcc1d5d42b6c926b19ae16686fd957eb33b577c94f8d99f43ba93a7ce3004
	Dec 19 02:40:18 functional-199791 kubelet[6110]: E1219 02:40:18.782680    6110 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod009e7ad8-75b8-4205-91aa-980d65bb83a4/crio-e1d059d70d36e2c4c7d5370b5977a7ba91f8fbe8b2bb4509c589ebc9050e747d: Error finding container e1d059d70d36e2c4c7d5370b5977a7ba91f8fbe8b2bb4509c589ebc9050e747d: Status 404 returned error can't find the container with id e1d059d70d36e2c4c7d5370b5977a7ba91f8fbe8b2bb4509c589ebc9050e747d
	Dec 19 02:40:18 functional-199791 kubelet[6110]: E1219 02:40:18.783176    6110 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod7e3d43cf11f27df64ddec0bd25dc66e3/crio-8d97915e3f5ac04d97267d1cfa0098c5318fd7389140d8cc4902a2205187ba66: Error finding container 8d97915e3f5ac04d97267d1cfa0098c5318fd7389140d8cc4902a2205187ba66: Status 404 returned error can't find the container with id 8d97915e3f5ac04d97267d1cfa0098c5318fd7389140d8cc4902a2205187ba66
	Dec 19 02:40:18 functional-199791 kubelet[6110]: E1219 02:40:18.862371    6110 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766112018861965127  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:197523}  inodes_used:{value:92}}"
	Dec 19 02:40:18 functional-199791 kubelet[6110]: E1219 02:40:18.862407    6110 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766112018861965127  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:197523}  inodes_used:{value:92}}"
	Dec 19 02:40:21 functional-199791 kubelet[6110]: E1219 02:40:21.657527    6110 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-7lwrx" podUID="96192f1e-8144-4970-a848-961ca9d6a26b"
	Dec 19 02:40:28 functional-199791 kubelet[6110]: E1219 02:40:28.864956    6110 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766112028863897518  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:197523}  inodes_used:{value:92}}"
	Dec 19 02:40:28 functional-199791 kubelet[6110]: E1219 02:40:28.864993    6110 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766112028863897518  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:197523}  inodes_used:{value:92}}"
	Dec 19 02:40:38 functional-199791 kubelet[6110]: E1219 02:40:38.866320    6110 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766112038865907713  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:197523}  inodes_used:{value:92}}"
	Dec 19 02:40:38 functional-199791 kubelet[6110]: E1219 02:40:38.866345    6110 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766112038865907713  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:197523}  inodes_used:{value:92}}"
	Dec 19 02:40:40 functional-199791 kubelet[6110]: E1219 02:40:40.400722    6110 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Dec 19 02:40:40 functional-199791 kubelet[6110]: E1219 02:40:40.400766    6110 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Dec 19 02:40:40 functional-199791 kubelet[6110]: E1219 02:40:40.400977    6110 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-connect-7d85dfc575-2gcm4_default(01c1e94b-24ce-4fae-a099-60f4a14f115a): ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 19 02:40:40 functional-199791 kubelet[6110]: E1219 02:40:40.401010    6110 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-2gcm4" podUID="01c1e94b-24ce-4fae-a099-60f4a14f115a"
	Dec 19 02:40:40 functional-199791 kubelet[6110]: E1219 02:40:40.625327    6110 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-2gcm4" podUID="01c1e94b-24ce-4fae-a099-60f4a14f115a"
	Dec 19 02:40:48 functional-199791 kubelet[6110]: E1219 02:40:48.869547    6110 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766112048868574452  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:197523}  inodes_used:{value:92}}"
	Dec 19 02:40:48 functional-199791 kubelet[6110]: E1219 02:40:48.869589    6110 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766112048868574452  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:197523}  inodes_used:{value:92}}"
	
	
	==> storage-provisioner [3075784cb0c1efff3c934fed9d726e8df01803e52a8cfe91ebf520417d3dfc95] <==
	I1219 02:34:39.397906       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1219 02:34:39.414358       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1219 02:34:39.414933       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1219 02:34:39.418048       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:34:42.877145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:34:47.137842       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:34:50.736020       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:34:53.790160       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:34:56.812611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:34:56.817421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1219 02:34:56.817536       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1219 02:34:56.817676       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-199791_d10e771c-be38-4def-81f0-9169b6a9faaa!
	I1219 02:34:56.818428       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"666199d8-8509-44d4-8b92-058a8c4f7820", APIVersion:"v1", ResourceVersion:"533", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-199791_d10e771c-be38-4def-81f0-9169b6a9faaa became leader
	W1219 02:34:56.820032       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:34:56.828175       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1219 02:34:56.918469       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-199791_d10e771c-be38-4def-81f0-9169b6a9faaa!
	W1219 02:34:58.831310       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:34:58.838546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:35:00.843150       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:35:00.851301       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:35:02.855336       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:35:02.860314       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [4e4ecc916e87a11785556ec14c15e599641a328ac5360ee36fa978933c69d4a5] <==
	W1219 02:40:24.379778       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:40:26.382564       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:40:26.387518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:40:28.391053       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:40:28.398560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:40:30.402374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:40:30.409090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:40:32.411757       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:40:32.416471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:40:34.419502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:40:34.424559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:40:36.428590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:40:36.435134       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:40:38.438525       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:40:38.443085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:40:40.445541       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:40:40.449637       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:40:42.453671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:40:42.461852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:40:44.464615       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:40:44.469062       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:40:46.472093       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:40:46.477006       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:40:48.481998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:40:48.491757       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-199791 -n functional-199791
helpers_test.go:270: (dbg) Run:  kubectl --context functional-199791 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: busybox-mount hello-node-75c85bcc94-7lwrx hello-node-connect-7d85dfc575-2gcm4 mysql-6bcdcbc558-qcs65 kubernetes-dashboard-api-55487dd988-m95fb kubernetes-dashboard-auth-59779df8d5-n7zvj kubernetes-dashboard-kong-9849c64bd-shqrq kubernetes-dashboard-metrics-scraper-7685fd8b77-46x9w kubernetes-dashboard-web-5c9f966b98-jmj2w
helpers_test.go:283: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context functional-199791 describe pod busybox-mount hello-node-75c85bcc94-7lwrx hello-node-connect-7d85dfc575-2gcm4 mysql-6bcdcbc558-qcs65 kubernetes-dashboard-api-55487dd988-m95fb kubernetes-dashboard-auth-59779df8d5-n7zvj kubernetes-dashboard-kong-9849c64bd-shqrq kubernetes-dashboard-metrics-scraper-7685fd8b77-46x9w kubernetes-dashboard-web-5c9f966b98-jmj2w
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context functional-199791 describe pod busybox-mount hello-node-75c85bcc94-7lwrx hello-node-connect-7d85dfc575-2gcm4 mysql-6bcdcbc558-qcs65 kubernetes-dashboard-api-55487dd988-m95fb kubernetes-dashboard-auth-59779df8d5-n7zvj kubernetes-dashboard-kong-9849c64bd-shqrq kubernetes-dashboard-metrics-scraper-7685fd8b77-46x9w kubernetes-dashboard-web-5c9f966b98-jmj2w: exit status 1 (93.045534ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-199791/192.168.39.97
	Start Time:       Fri, 19 Dec 2025 02:35:47 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://7a3c52996d6ef4428177d121063d57150d5c4de8dc99d3c67aa49e6528059cc0
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Fri, 19 Dec 2025 02:36:21 +0000
	      Finished:     Fri, 19 Dec 2025 02:36:21 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wp9z4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-wp9z4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m1s   default-scheduler  Successfully assigned default/busybox-mount to functional-199791
	  Normal  Pulling    5m1s   kubelet            spec.containers{mount-munger}: Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     4m28s  kubelet            spec.containers{mount-munger}: Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 3.19s (33.462s including waiting). Image size: 4631262 bytes.
	  Normal  Created    4m28s  kubelet            spec.containers{mount-munger}: Created container: mount-munger
	  Normal  Started    4m28s  kubelet            spec.containers{mount-munger}: Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-7lwrx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-199791/192.168.39.97
	Start Time:       Fri, 19 Dec 2025 02:35:45 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tx7cl (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-tx7cl:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  5m4s                 default-scheduler  Successfully assigned default/hello-node-75c85bcc94-7lwrx to functional-199791
	  Warning  Failed     4m31s                kubelet            spec.containers{echo-server}: Failed to pull image "kicbase/echo-server": copying system image from manifest list: determining manifest MIME type for docker://kicbase/echo-server:latest: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     41s (x2 over 4m31s)  kubelet            spec.containers{echo-server}: Error: ErrImagePull
	  Warning  Failed     41s                  kubelet            spec.containers{echo-server}: Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    28s (x2 over 4m30s)  kubelet            spec.containers{echo-server}: Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     28s (x2 over 4m30s)  kubelet            spec.containers{echo-server}: Error: ImagePullBackOff
	  Normal   Pulling    17s (x3 over 5m4s)   kubelet            spec.containers{echo-server}: Pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-2gcm4
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-199791/192.168.39.97
	Start Time:       Fri, 19 Dec 2025 02:36:36 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.15
	IPs:
	  IP:           10.244.0.15
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-v8jgj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-v8jgj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age    From               Message
	  ----     ------     ----   ----               -------
	  Normal   Scheduled  4m12s  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-2gcm4 to functional-199791
	  Normal   Pulling    4m12s  kubelet            spec.containers{echo-server}: Pulling image "kicbase/echo-server"
	  Warning  Failed     9s     kubelet            spec.containers{echo-server}: Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     9s     kubelet            spec.containers{echo-server}: Error: ErrImagePull
	  Normal   BackOff    9s     kubelet            spec.containers{echo-server}: Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     9s     kubelet            spec.containers{echo-server}: Error: ImagePullBackOff
	
	
	Name:             mysql-6bcdcbc558-qcs65
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-199791/192.168.39.97
	Start Time:       Fri, 19 Dec 2025 02:39:51 +0000
	Labels:           app=mysql
	                  pod-template-hash=6bcdcbc558
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/mysql-6bcdcbc558
	Containers:
	  mysql:
	    Container ID:   
	    Image:          public.ecr.aws/docker/library/mysql:8.4
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rssrc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-rssrc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  57s   default-scheduler  Successfully assigned default/mysql-6bcdcbc558-qcs65 to functional-199791
	  Normal  Pulling    57s   kubelet            spec.containers{mysql}: Pulling image "public.ecr.aws/docker/library/mysql:8.4"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "kubernetes-dashboard-api-55487dd988-m95fb" not found
	Error from server (NotFound): pods "kubernetes-dashboard-auth-59779df8d5-n7zvj" not found
	Error from server (NotFound): pods "kubernetes-dashboard-kong-9849c64bd-shqrq" not found
	Error from server (NotFound): pods "kubernetes-dashboard-metrics-scraper-7685fd8b77-46x9w" not found
	Error from server (NotFound): pods "kubernetes-dashboard-web-5c9f966b98-jmj2w" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context functional-199791 describe pod busybox-mount hello-node-75c85bcc94-7lwrx hello-node-connect-7d85dfc575-2gcm4 mysql-6bcdcbc558-qcs65 kubernetes-dashboard-api-55487dd988-m95fb kubernetes-dashboard-auth-59779df8d5-n7zvj kubernetes-dashboard-kong-9849c64bd-shqrq kubernetes-dashboard-metrics-scraper-7685fd8b77-46x9w kubernetes-dashboard-web-5c9f966b98-jmj2w: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (301.90s)
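
The describe output above points at Docker Hub's unauthenticated pull rate limit as the driver of this failure: every pull of "kicbase/echo-server" from docker.io returns "toomanyrequests", so the echo-server container never starts, and the dashboard pods themselves were never found (the NotFound errors in stderr). As a hedged, illustrative follow-up against the functional-199791 profile (not part of the test run, and assuming the dashboard addon deploys into its usual kubernetes-dashboard namespace), one could confirm the same picture directly:

	# Hypothetical read-only checks, not executed by the test itself.
	# List the Failed image-pull events in the default namespace, newest last.
	kubectl --context functional-199791 get events -n default --field-selector reason=Failed --sort-by=.lastTimestamp
	# Check whether any dashboard pods were ever created in their usual namespace.
	kubectl --context functional-199791 get pods -n kubernetes-dashboard
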

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (602.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-199791 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-199791 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-2gcm4" [01c1e94b-24ce-4fae-a099-60f4a14f115a] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1219 02:37:49.625738    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:38:17.318711    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-199791 -n functional-199791
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-12-19 02:46:37.041319886 +0000 UTC m=+1300.079499198
functional_test.go:1645: (dbg) Run:  kubectl --context functional-199791 describe po hello-node-connect-7d85dfc575-2gcm4 -n default
functional_test.go:1645: (dbg) kubectl --context functional-199791 describe po hello-node-connect-7d85dfc575-2gcm4 -n default:
Name:             hello-node-connect-7d85dfc575-2gcm4
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-199791/192.168.39.97
Start Time:       Fri, 19 Dec 2025 02:36:36 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.15
IPs:
  IP:           10.244.0.15
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-v8jgj (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-v8jgj:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-2gcm4 to functional-199791
  Warning  Failed     5m57s                kubelet            spec.containers{echo-server}: Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     77s (x2 over 5m57s)  kubelet            spec.containers{echo-server}: Error: ErrImagePull
  Warning  Failed     77s                  kubelet            spec.containers{echo-server}: Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   BackOff    65s (x2 over 5m57s)  kubelet            spec.containers{echo-server}: Back-off pulling image "kicbase/echo-server"
  Warning  Failed     65s (x2 over 5m57s)  kubelet            spec.containers{echo-server}: Error: ImagePullBackOff
  Normal   Pulling    53s (x3 over 10m)    kubelet            spec.containers{echo-server}: Pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-199791 logs hello-node-connect-7d85dfc575-2gcm4 -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-199791 logs hello-node-connect-7d85dfc575-2gcm4 -n default: exit status 1 (58.143294ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-2gcm4" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-199791 logs hello-node-connect-7d85dfc575-2gcm4 -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-199791 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-2gcm4
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-199791/192.168.39.97
Start Time:       Fri, 19 Dec 2025 02:36:36 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.15
IPs:
  IP:           10.244.0.15
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-v8jgj (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-v8jgj:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-2gcm4 to functional-199791
  Warning  Failed     5m57s                kubelet            spec.containers{echo-server}: Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     77s (x2 over 5m57s)  kubelet            spec.containers{echo-server}: Error: ErrImagePull
  Warning  Failed     77s                  kubelet            spec.containers{echo-server}: Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   BackOff    65s (x2 over 5m57s)  kubelet            spec.containers{echo-server}: Back-off pulling image "kicbase/echo-server"
  Warning  Failed     65s (x2 over 5m57s)  kubelet            spec.containers{echo-server}: Error: ImagePullBackOff
  Normal   Pulling    53s (x3 over 10m)    kubelet            spec.containers{echo-server}: Pulling image "kicbase/echo-server"

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-199791 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-199791 logs -l app=hello-node-connect: exit status 1 (62.984458ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-2gcm4" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-199791 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-199791 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.107.160.98
IPs:                      10.107.160.98
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30367/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
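
The service describe above shows an empty Endpoints field: the NodePort service exists (ClusterIP 10.107.160.98, NodePort 30367), but it has nothing to route to because the only pod matching app=hello-node-connect never became Ready. A minimal, hypothetical way to observe the same thing outside the test harness, assuming the deployment lives in the default namespace as created above, would be:

	# Hypothetical checks, not part of the test: an unready pod is excluded from
	# the service's endpoints, so the endpoint list below would come back empty.
	kubectl --context functional-199791 get endpoints hello-node-connect -n default
	kubectl --context functional-199791 get pods -l app=hello-node-connect -n default -o wide
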
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-199791 -n functional-199791
helpers_test.go:253: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-199791 logs -n 25: (1.189542351s)
helpers_test.go:261: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                  ARGS                                                  │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-199791 image save --daemon kicbase/echo-server:functional-199791 --alsologtostderr          │ functional-199791 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │ 19 Dec 25 02:36 UTC │
	│ ssh            │ functional-199791 ssh sudo cat /etc/ssl/certs/8937.pem                                                 │ functional-199791 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │ 19 Dec 25 02:36 UTC │
	│ ssh            │ functional-199791 ssh sudo cat /usr/share/ca-certificates/8937.pem                                     │ functional-199791 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │ 19 Dec 25 02:36 UTC │
	│ ssh            │ functional-199791 ssh sudo cat /etc/ssl/certs/51391683.0                                               │ functional-199791 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │ 19 Dec 25 02:36 UTC │
	│ ssh            │ functional-199791 ssh sudo cat /etc/ssl/certs/89372.pem                                                │ functional-199791 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │ 19 Dec 25 02:36 UTC │
	│ ssh            │ functional-199791 ssh sudo cat /usr/share/ca-certificates/89372.pem                                    │ functional-199791 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │ 19 Dec 25 02:36 UTC │
	│ ssh            │ functional-199791 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                               │ functional-199791 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │ 19 Dec 25 02:36 UTC │
	│ addons         │ functional-199791 addons list                                                                          │ functional-199791 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │ 19 Dec 25 02:36 UTC │
	│ addons         │ functional-199791 addons list -o json                                                                  │ functional-199791 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │ 19 Dec 25 02:36 UTC │
	│ ssh            │ functional-199791 ssh sudo cat /etc/test/nested/copy/8937/hosts                                        │ functional-199791 │ jenkins │ v1.37.0 │ 19 Dec 25 02:39 UTC │ 19 Dec 25 02:39 UTC │
	│ image          │ functional-199791 image ls --format short --alsologtostderr                                            │ functional-199791 │ jenkins │ v1.37.0 │ 19 Dec 25 02:40 UTC │ 19 Dec 25 02:40 UTC │
	│ image          │ functional-199791 image ls --format yaml --alsologtostderr                                             │ functional-199791 │ jenkins │ v1.37.0 │ 19 Dec 25 02:40 UTC │ 19 Dec 25 02:40 UTC │
	│ ssh            │ functional-199791 ssh pgrep buildkitd                                                                  │ functional-199791 │ jenkins │ v1.37.0 │ 19 Dec 25 02:40 UTC │                     │
	│ image          │ functional-199791 image build -t localhost/my-image:functional-199791 testdata/build --alsologtostderr │ functional-199791 │ jenkins │ v1.37.0 │ 19 Dec 25 02:40 UTC │ 19 Dec 25 02:40 UTC │
	│ image          │ functional-199791 image ls                                                                             │ functional-199791 │ jenkins │ v1.37.0 │ 19 Dec 25 02:40 UTC │ 19 Dec 25 02:40 UTC │
	│ image          │ functional-199791 image ls --format json --alsologtostderr                                             │ functional-199791 │ jenkins │ v1.37.0 │ 19 Dec 25 02:40 UTC │ 19 Dec 25 02:40 UTC │
	│ image          │ functional-199791 image ls --format table --alsologtostderr                                            │ functional-199791 │ jenkins │ v1.37.0 │ 19 Dec 25 02:40 UTC │ 19 Dec 25 02:40 UTC │
	│ update-context │ functional-199791 update-context --alsologtostderr -v=2                                                │ functional-199791 │ jenkins │ v1.37.0 │ 19 Dec 25 02:40 UTC │ 19 Dec 25 02:40 UTC │
	│ update-context │ functional-199791 update-context --alsologtostderr -v=2                                                │ functional-199791 │ jenkins │ v1.37.0 │ 19 Dec 25 02:40 UTC │ 19 Dec 25 02:40 UTC │
	│ update-context │ functional-199791 update-context --alsologtostderr -v=2                                                │ functional-199791 │ jenkins │ v1.37.0 │ 19 Dec 25 02:40 UTC │ 19 Dec 25 02:40 UTC │
	│ service        │ functional-199791 service list                                                                         │ functional-199791 │ jenkins │ v1.37.0 │ 19 Dec 25 02:45 UTC │ 19 Dec 25 02:45 UTC │
	│ service        │ functional-199791 service list -o json                                                                 │ functional-199791 │ jenkins │ v1.37.0 │ 19 Dec 25 02:45 UTC │ 19 Dec 25 02:45 UTC │
	│ service        │ functional-199791 service --namespace=default --https --url hello-node                                 │ functional-199791 │ jenkins │ v1.37.0 │ 19 Dec 25 02:45 UTC │                     │
	│ service        │ functional-199791 service hello-node --url --format={{.IP}}                                            │ functional-199791 │ jenkins │ v1.37.0 │ 19 Dec 25 02:45 UTC │                     │
	│ service        │ functional-199791 service hello-node --url                                                             │ functional-199791 │ jenkins │ v1.37.0 │ 19 Dec 25 02:45 UTC │                     │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 02:35:47
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 02:35:47.431533   14641 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:35:47.431669   14641 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:35:47.431680   14641 out.go:374] Setting ErrFile to fd 2...
	I1219 02:35:47.431687   14641 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:35:47.431969   14641 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5010/.minikube/bin
	I1219 02:35:47.432515   14641 out.go:368] Setting JSON to false
	I1219 02:35:47.433644   14641 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1091,"bootTime":1766110656,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 02:35:47.433708   14641 start.go:143] virtualization: kvm guest
	I1219 02:35:47.436677   14641 out.go:179] * [functional-199791] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 02:35:47.437685   14641 notify.go:221] Checking for updates...
	I1219 02:35:47.437691   14641 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 02:35:47.438935   14641 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 02:35:47.439938   14641 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-5010/kubeconfig
	I1219 02:35:47.441135   14641 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5010/.minikube
	I1219 02:35:47.442105   14641 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 02:35:47.443007   14641 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 02:35:47.444464   14641 config.go:182] Loaded profile config "functional-199791": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:35:47.445071   14641 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 02:35:47.479179   14641 out.go:179] * Using the kvm2 driver based on existing profile
	I1219 02:35:47.480239   14641 start.go:309] selected driver: kvm2
	I1219 02:35:47.480254   14641 start.go:928] validating driver "kvm2" against &{Name:functional-199791 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.3 ClusterName:functional-199791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
ntString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 02:35:47.480378   14641 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 02:35:47.481714   14641 cni.go:84] Creating CNI manager for ""
	I1219 02:35:47.481800   14641 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1219 02:35:47.481873   14641 start.go:353] cluster config:
	{Name:functional-199791 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-199791 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 M
ountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 02:35:47.483729   14641 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 19 02:46:37 functional-199791 crio[5342]: time="2025-12-19 02:46:37.902757676Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766112397902735259,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:243802,},InodesUsed:&UInt64Value{Value:113,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=475b1708-5468-489e-8b21-a8decabed65c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 02:46:37 functional-199791 crio[5342]: time="2025-12-19 02:46:37.903500401Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=db14d598-ab70-490e-8098-49ce653c0faf name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 02:46:37 functional-199791 crio[5342]: time="2025-12-19 02:46:37.903570786Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=db14d598-ab70-490e-8098-49ce653c0faf name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 02:46:37 functional-199791 crio[5342]: time="2025-12-19 02:46:37.903915347Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:92508c1f3c0c6c5f80cb413aeae2f9de1c50704b399d3df0278b80a31e4b8078,PodSandboxId:543367912b4d98459500e7be9628a00450af41167109a43ec37d5739717f463d,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1766112242844702910,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-6bcdcbc558-qcs65,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b0bf0475-2199-4014-9972-16c0ce9f2b22,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\
"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:915300e33a8cd61c5f14bed9cadca6c9b66e6cfec608f7a40eb81b8c22445bc1,PodSandboxId:4cbdf2f437de9b7273e49b5dc8595eb4415bc58bb273c53514f2f223a8ce0565,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,State:CONTAINER_RUNNING,CreatedAt:1766111986017570016,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bb2654b-9961-46d3-b74f-9fbbcb8c3dc8,},Annotations:map[string]string{io.kubernetes.container.hash: 8
389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a3c52996d6ef4428177d121063d57150d5c4de8dc99d3c67aa49e6528059cc0,PodSandboxId:4331c5201705a45b19bb3074d574f1e4e878066a8459f01541c8330d89b143e7,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1766111781760253534,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 22cf93c9-d4ef-4b69-aa11-9269bc23bee3,},Annotations:map[string]string{io.kubernetes.container.hash: dbb
284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e4ecc916e87a11785556ec14c15e599641a328ac5360ee36fa978933c69d4a5,PodSandboxId:55ace58bc419cdf89c7b5f891a67b33aa94e08b0650fb0346336f7685e19670a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766111722971085720,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 009e7ad8-75b8-4205-91aa-980d65bb83a4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kub
ernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14dd47506e6db4c1c5bf0105d048007e7b6fce1395f5974e14bbc518c4faf04a,PodSandboxId:874c35fdc7a5c34d400476afb453084421777d1f710b1d675a3d383f0aaa7eaa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1766111722942476552,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5j8g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d29d4a03-e8eb-4d11-bafc-d47ea5ede72e,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 2
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52a049ecd0fb48ec7198d01e8e295c6fb2a80e42ca2858b2d1b4c8077a828d55,PodSandboxId:09f936bd1f80b0df8536cee11289c78180e5f6b332e58d57e72fb7465096d86e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1766111722986772477,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-8tvs6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3a0f0e2-fe99-419e-a874-319cfe3e8dd7,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46f0d2f8a60f0dbd158a46e846583eb03edc2033ef005567b93ea2262aa4fa05,PodSandboxId:7f5f836c530bc91e6907a850cf150a97eef32e673540fd842d2fd5070f192d36,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1766111719544120834,Labe
ls:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-199791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64a96f1a0516105e4ebd728fb48b0648,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa115163640dc4719c1124afeb14c79fa01b41acf029663ee8bb4e570e78c375,PodSandboxId:ceba3cafce51714487340983f595fbeb46099c81d2f775a985fdfc62749c49cf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b2
5d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766111719332457450,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-199791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0c51f768ca9cef53541023863070d9e,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d903204735fb13ce21d9062ff0eb732ebe095fac0f7280517e6dd00c7ef9bd4e,PodSandboxId:e71c5e6505c8201b70109946b6c082cb4858fcf09de3d2f4913f2f8b80bd931c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:aec12dadf56dd45659a682
b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766111719292096637,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-199791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d29b63d26a2dfe9948639b49d1769e8,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:338f5e7eb5c69c5e72e44dc7916b142d97aaeaf9db504e426172bbb53a6d36eb,PodSandboxId:58e184d7505e42d78b545b795c3856aebe5a8d366597ea01932725c275ec
8e05,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766111716811044040,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-199791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e3d43cf11f27df64ddec0bd25dc66e3,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58657d9eea63330c183895d19b56ddc23095634d978e03443079491a1639
f28b,PodSandboxId:654c1b461cf348f51f9f5081ee7d0a33524193a5db776507a0fa699a128fc2d4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1766111679289177450,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5j8g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d29d4a03-e8eb-4d11-bafc-d47ea5ede72e,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85725cf13093dada9e4553872d01e38d79622fc88f2912555c64a806f347b8c4,PodSandboxId:88a84164bc70bcb8
5b52eba377e96b2b29ec6ddf471223d4156de7b0eede6597,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1766111679298440696,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-8tvs6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3a0f0e2-fe99-419e-a874-319cfe3e8dd7,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"
protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3075784cb0c1efff3c934fed9d726e8df01803e52a8cfe91ebf520417d3dfc95,PodSandboxId:e1d059d70d36e2c4c7d5370b5977a7ba91f8fbe8b2bb4509c589ebc9050e747d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766111679269407776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 009e7ad8-75b8-4205-91aa-980d65bb83a4,},Annotations:map[string]string{io.kubernetes.container.hash: 6
c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bee52ab239e495b06e0329d9e7d462a9622090d0b3752eeed407bdedbf468603,PodSandboxId:19e4328300f57681fd9b1ecb6db8edf2f79368681bf85ed161a36a6261a1ad81,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1766111653480503920,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-199791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d29b63d26a2dfe9948639b49d1769e8,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kub
ernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb15be933a8593073b861a6d6cdf247d04730954732953f38f8926b908ce48ad,PodSandboxId:e5bbcc1d5d42b6c926b19ae16686fd957eb33b577c94f8d99f43ba93a7ce3004,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedAt:1766111653427178217,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-199791,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: f0c51f768ca9cef53541023863070d9e,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd4284e1305c336fc0d68e458a9d6ce69c58edf45c2dcc81f806e69bd26c7e60,PodSandboxId:8d97915e3f5ac04d97267d1cfa0098c5318fd7389140d8cc4902a2205187ba66,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1766111653351343328,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kub
ernetes.pod.name: etcd-functional-199791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e3d43cf11f27df64ddec0bd25dc66e3,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=db14d598-ab70-490e-8098-49ce653c0faf name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 02:46:37 functional-199791 crio[5342]: time="2025-12-19 02:46:37.943547747Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9d049874-8583-46eb-84dd-8897be1f93c9 name=/runtime.v1.RuntimeService/Version
	Dec 19 02:46:37 functional-199791 crio[5342]: time="2025-12-19 02:46:37.943729351Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9d049874-8583-46eb-84dd-8897be1f93c9 name=/runtime.v1.RuntimeService/Version
	Dec 19 02:46:37 functional-199791 crio[5342]: time="2025-12-19 02:46:37.945184150Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dae817f5-58b9-4611-b7ed-1dd22a89bec0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 02:46:37 functional-199791 crio[5342]: time="2025-12-19 02:46:37.945989453Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766112397945966623,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:243802,},InodesUsed:&UInt64Value{Value:113,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dae817f5-58b9-4611-b7ed-1dd22a89bec0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 02:46:37 functional-199791 crio[5342]: time="2025-12-19 02:46:37.946877398Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=347017eb-3048-42a5-9209-3efad755c12b name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 02:46:37 functional-199791 crio[5342]: time="2025-12-19 02:46:37.946929617Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=347017eb-3048-42a5-9209-3efad755c12b name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 02:46:37 functional-199791 crio[5342]: time="2025-12-19 02:46:37.947223157Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:92508c1f3c0c6c5f80cb413aeae2f9de1c50704b399d3df0278b80a31e4b8078,PodSandboxId:543367912b4d98459500e7be9628a00450af41167109a43ec37d5739717f463d,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1766112242844702910,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-6bcdcbc558-qcs65,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b0bf0475-2199-4014-9972-16c0ce9f2b22,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\
"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:915300e33a8cd61c5f14bed9cadca6c9b66e6cfec608f7a40eb81b8c22445bc1,PodSandboxId:4cbdf2f437de9b7273e49b5dc8595eb4415bc58bb273c53514f2f223a8ce0565,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,State:CONTAINER_RUNNING,CreatedAt:1766111986017570016,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bb2654b-9961-46d3-b74f-9fbbcb8c3dc8,},Annotations:map[string]string{io.kubernetes.container.hash: 8
389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a3c52996d6ef4428177d121063d57150d5c4de8dc99d3c67aa49e6528059cc0,PodSandboxId:4331c5201705a45b19bb3074d574f1e4e878066a8459f01541c8330d89b143e7,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1766111781760253534,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 22cf93c9-d4ef-4b69-aa11-9269bc23bee3,},Annotations:map[string]string{io.kubernetes.container.hash: dbb
284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e4ecc916e87a11785556ec14c15e599641a328ac5360ee36fa978933c69d4a5,PodSandboxId:55ace58bc419cdf89c7b5f891a67b33aa94e08b0650fb0346336f7685e19670a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766111722971085720,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 009e7ad8-75b8-4205-91aa-980d65bb83a4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kub
ernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14dd47506e6db4c1c5bf0105d048007e7b6fce1395f5974e14bbc518c4faf04a,PodSandboxId:874c35fdc7a5c34d400476afb453084421777d1f710b1d675a3d383f0aaa7eaa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1766111722942476552,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5j8g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d29d4a03-e8eb-4d11-bafc-d47ea5ede72e,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 2
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52a049ecd0fb48ec7198d01e8e295c6fb2a80e42ca2858b2d1b4c8077a828d55,PodSandboxId:09f936bd1f80b0df8536cee11289c78180e5f6b332e58d57e72fb7465096d86e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1766111722986772477,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-8tvs6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3a0f0e2-fe99-419e-a874-319cfe3e8dd7,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46f0d2f8a60f0dbd158a46e846583eb03edc2033ef005567b93ea2262aa4fa05,PodSandboxId:7f5f836c530bc91e6907a850cf150a97eef32e673540fd842d2fd5070f192d36,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1766111719544120834,Labe
ls:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-199791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64a96f1a0516105e4ebd728fb48b0648,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa115163640dc4719c1124afeb14c79fa01b41acf029663ee8bb4e570e78c375,PodSandboxId:ceba3cafce51714487340983f595fbeb46099c81d2f775a985fdfc62749c49cf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b2
5d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766111719332457450,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-199791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0c51f768ca9cef53541023863070d9e,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d903204735fb13ce21d9062ff0eb732ebe095fac0f7280517e6dd00c7ef9bd4e,PodSandboxId:e71c5e6505c8201b70109946b6c082cb4858fcf09de3d2f4913f2f8b80bd931c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:aec12dadf56dd45659a682
b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766111719292096637,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-199791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d29b63d26a2dfe9948639b49d1769e8,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:338f5e7eb5c69c5e72e44dc7916b142d97aaeaf9db504e426172bbb53a6d36eb,PodSandboxId:58e184d7505e42d78b545b795c3856aebe5a8d366597ea01932725c275ec
8e05,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766111716811044040,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-199791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e3d43cf11f27df64ddec0bd25dc66e3,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58657d9eea63330c183895d19b56ddc23095634d978e03443079491a1639
f28b,PodSandboxId:654c1b461cf348f51f9f5081ee7d0a33524193a5db776507a0fa699a128fc2d4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1766111679289177450,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5j8g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d29d4a03-e8eb-4d11-bafc-d47ea5ede72e,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85725cf13093dada9e4553872d01e38d79622fc88f2912555c64a806f347b8c4,PodSandboxId:88a84164bc70bcb8
5b52eba377e96b2b29ec6ddf471223d4156de7b0eede6597,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1766111679298440696,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-8tvs6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3a0f0e2-fe99-419e-a874-319cfe3e8dd7,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"
protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3075784cb0c1efff3c934fed9d726e8df01803e52a8cfe91ebf520417d3dfc95,PodSandboxId:e1d059d70d36e2c4c7d5370b5977a7ba91f8fbe8b2bb4509c589ebc9050e747d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766111679269407776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 009e7ad8-75b8-4205-91aa-980d65bb83a4,},Annotations:map[string]string{io.kubernetes.container.hash: 6
c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bee52ab239e495b06e0329d9e7d462a9622090d0b3752eeed407bdedbf468603,PodSandboxId:19e4328300f57681fd9b1ecb6db8edf2f79368681bf85ed161a36a6261a1ad81,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1766111653480503920,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-199791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d29b63d26a2dfe9948639b49d1769e8,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kub
ernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb15be933a8593073b861a6d6cdf247d04730954732953f38f8926b908ce48ad,PodSandboxId:e5bbcc1d5d42b6c926b19ae16686fd957eb33b577c94f8d99f43ba93a7ce3004,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedAt:1766111653427178217,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-199791,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: f0c51f768ca9cef53541023863070d9e,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd4284e1305c336fc0d68e458a9d6ce69c58edf45c2dcc81f806e69bd26c7e60,PodSandboxId:8d97915e3f5ac04d97267d1cfa0098c5318fd7389140d8cc4902a2205187ba66,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1766111653351343328,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kub
ernetes.pod.name: etcd-functional-199791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e3d43cf11f27df64ddec0bd25dc66e3,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=347017eb-3048-42a5-9209-3efad755c12b name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 02:46:37 functional-199791 crio[5342]: time="2025-12-19 02:46:37.985071738Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=93fa59d4-4b4f-4132-acd9-8f88e63d7cdc name=/runtime.v1.RuntimeService/Version
	Dec 19 02:46:37 functional-199791 crio[5342]: time="2025-12-19 02:46:37.985289564Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=93fa59d4-4b4f-4132-acd9-8f88e63d7cdc name=/runtime.v1.RuntimeService/Version
	Dec 19 02:46:37 functional-199791 crio[5342]: time="2025-12-19 02:46:37.986846821Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4ce2886f-53ef-40fd-b4bf-fc1149e008b7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 02:46:37 functional-199791 crio[5342]: time="2025-12-19 02:46:37.987520982Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766112397987499890,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:243802,},InodesUsed:&UInt64Value{Value:113,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4ce2886f-53ef-40fd-b4bf-fc1149e008b7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 02:46:37 functional-199791 crio[5342]: time="2025-12-19 02:46:37.988549241Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f6d726a1-9754-4371-8107-96da00585764 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 02:46:37 functional-199791 crio[5342]: time="2025-12-19 02:46:37.988616884Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f6d726a1-9754-4371-8107-96da00585764 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 02:46:37 functional-199791 crio[5342]: time="2025-12-19 02:46:37.989077149Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:92508c1f3c0c6c5f80cb413aeae2f9de1c50704b399d3df0278b80a31e4b8078,PodSandboxId:543367912b4d98459500e7be9628a00450af41167109a43ec37d5739717f463d,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1766112242844702910,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-6bcdcbc558-qcs65,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b0bf0475-2199-4014-9972-16c0ce9f2b22,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\
"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:915300e33a8cd61c5f14bed9cadca6c9b66e6cfec608f7a40eb81b8c22445bc1,PodSandboxId:4cbdf2f437de9b7273e49b5dc8595eb4415bc58bb273c53514f2f223a8ce0565,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,State:CONTAINER_RUNNING,CreatedAt:1766111986017570016,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bb2654b-9961-46d3-b74f-9fbbcb8c3dc8,},Annotations:map[string]string{io.kubernetes.container.hash: 8
389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a3c52996d6ef4428177d121063d57150d5c4de8dc99d3c67aa49e6528059cc0,PodSandboxId:4331c5201705a45b19bb3074d574f1e4e878066a8459f01541c8330d89b143e7,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1766111781760253534,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 22cf93c9-d4ef-4b69-aa11-9269bc23bee3,},Annotations:map[string]string{io.kubernetes.container.hash: dbb
284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e4ecc916e87a11785556ec14c15e599641a328ac5360ee36fa978933c69d4a5,PodSandboxId:55ace58bc419cdf89c7b5f891a67b33aa94e08b0650fb0346336f7685e19670a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766111722971085720,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 009e7ad8-75b8-4205-91aa-980d65bb83a4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kub
ernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14dd47506e6db4c1c5bf0105d048007e7b6fce1395f5974e14bbc518c4faf04a,PodSandboxId:874c35fdc7a5c34d400476afb453084421777d1f710b1d675a3d383f0aaa7eaa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1766111722942476552,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5j8g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d29d4a03-e8eb-4d11-bafc-d47ea5ede72e,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 2
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52a049ecd0fb48ec7198d01e8e295c6fb2a80e42ca2858b2d1b4c8077a828d55,PodSandboxId:09f936bd1f80b0df8536cee11289c78180e5f6b332e58d57e72fb7465096d86e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1766111722986772477,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-8tvs6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3a0f0e2-fe99-419e-a874-319cfe3e8dd7,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46f0d2f8a60f0dbd158a46e846583eb03edc2033ef005567b93ea2262aa4fa05,PodSandboxId:7f5f836c530bc91e6907a850cf150a97eef32e673540fd842d2fd5070f192d36,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1766111719544120834,Labe
ls:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-199791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64a96f1a0516105e4ebd728fb48b0648,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa115163640dc4719c1124afeb14c79fa01b41acf029663ee8bb4e570e78c375,PodSandboxId:ceba3cafce51714487340983f595fbeb46099c81d2f775a985fdfc62749c49cf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b2
5d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766111719332457450,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-199791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0c51f768ca9cef53541023863070d9e,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d903204735fb13ce21d9062ff0eb732ebe095fac0f7280517e6dd00c7ef9bd4e,PodSandboxId:e71c5e6505c8201b70109946b6c082cb4858fcf09de3d2f4913f2f8b80bd931c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:aec12dadf56dd45659a682
b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766111719292096637,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-199791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d29b63d26a2dfe9948639b49d1769e8,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:338f5e7eb5c69c5e72e44dc7916b142d97aaeaf9db504e426172bbb53a6d36eb,PodSandboxId:58e184d7505e42d78b545b795c3856aebe5a8d366597ea01932725c275ec
8e05,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766111716811044040,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-199791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e3d43cf11f27df64ddec0bd25dc66e3,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58657d9eea63330c183895d19b56ddc23095634d978e03443079491a1639
f28b,PodSandboxId:654c1b461cf348f51f9f5081ee7d0a33524193a5db776507a0fa699a128fc2d4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1766111679289177450,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5j8g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d29d4a03-e8eb-4d11-bafc-d47ea5ede72e,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85725cf13093dada9e4553872d01e38d79622fc88f2912555c64a806f347b8c4,PodSandboxId:88a84164bc70bcb8
5b52eba377e96b2b29ec6ddf471223d4156de7b0eede6597,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1766111679298440696,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-8tvs6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3a0f0e2-fe99-419e-a874-319cfe3e8dd7,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"
protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3075784cb0c1efff3c934fed9d726e8df01803e52a8cfe91ebf520417d3dfc95,PodSandboxId:e1d059d70d36e2c4c7d5370b5977a7ba91f8fbe8b2bb4509c589ebc9050e747d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766111679269407776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 009e7ad8-75b8-4205-91aa-980d65bb83a4,},Annotations:map[string]string{io.kubernetes.container.hash: 6
c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bee52ab239e495b06e0329d9e7d462a9622090d0b3752eeed407bdedbf468603,PodSandboxId:19e4328300f57681fd9b1ecb6db8edf2f79368681bf85ed161a36a6261a1ad81,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1766111653480503920,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-199791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d29b63d26a2dfe9948639b49d1769e8,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kub
ernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb15be933a8593073b861a6d6cdf247d04730954732953f38f8926b908ce48ad,PodSandboxId:e5bbcc1d5d42b6c926b19ae16686fd957eb33b577c94f8d99f43ba93a7ce3004,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedAt:1766111653427178217,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-199791,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: f0c51f768ca9cef53541023863070d9e,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd4284e1305c336fc0d68e458a9d6ce69c58edf45c2dcc81f806e69bd26c7e60,PodSandboxId:8d97915e3f5ac04d97267d1cfa0098c5318fd7389140d8cc4902a2205187ba66,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1766111653351343328,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kub
ernetes.pod.name: etcd-functional-199791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e3d43cf11f27df64ddec0bd25dc66e3,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f6d726a1-9754-4371-8107-96da00585764 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 02:46:38 functional-199791 crio[5342]: time="2025-12-19 02:46:38.017559386Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d56fbcb6-af3e-41c8-b2a4-ebc97d2524eb name=/runtime.v1.RuntimeService/Version
	Dec 19 02:46:38 functional-199791 crio[5342]: time="2025-12-19 02:46:38.017642355Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d56fbcb6-af3e-41c8-b2a4-ebc97d2524eb name=/runtime.v1.RuntimeService/Version
	Dec 19 02:46:38 functional-199791 crio[5342]: time="2025-12-19 02:46:38.018912048Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=afc45191-c433-41ef-9a5f-408920a1432e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 02:46:38 functional-199791 crio[5342]: time="2025-12-19 02:46:38.020175263Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766112398020151633,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:243802,},InodesUsed:&UInt64Value{Value:113,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=afc45191-c433-41ef-9a5f-408920a1432e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 02:46:38 functional-199791 crio[5342]: time="2025-12-19 02:46:38.021178027Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=599bc068-0446-4078-a21a-4e2dbb6459a3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 02:46:38 functional-199791 crio[5342]: time="2025-12-19 02:46:38.021257339Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=599bc068-0446-4078-a21a-4e2dbb6459a3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 02:46:38 functional-199791 crio[5342]: time="2025-12-19 02:46:38.021583195Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:92508c1f3c0c6c5f80cb413aeae2f9de1c50704b399d3df0278b80a31e4b8078,PodSandboxId:543367912b4d98459500e7be9628a00450af41167109a43ec37d5739717f463d,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1766112242844702910,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-6bcdcbc558-qcs65,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b0bf0475-2199-4014-9972-16c0ce9f2b22,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\
"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:915300e33a8cd61c5f14bed9cadca6c9b66e6cfec608f7a40eb81b8c22445bc1,PodSandboxId:4cbdf2f437de9b7273e49b5dc8595eb4415bc58bb273c53514f2f223a8ce0565,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,State:CONTAINER_RUNNING,CreatedAt:1766111986017570016,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bb2654b-9961-46d3-b74f-9fbbcb8c3dc8,},Annotations:map[string]string{io.kubernetes.container.hash: 8
389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a3c52996d6ef4428177d121063d57150d5c4de8dc99d3c67aa49e6528059cc0,PodSandboxId:4331c5201705a45b19bb3074d574f1e4e878066a8459f01541c8330d89b143e7,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1766111781760253534,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 22cf93c9-d4ef-4b69-aa11-9269bc23bee3,},Annotations:map[string]string{io.kubernetes.container.hash: dbb
284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e4ecc916e87a11785556ec14c15e599641a328ac5360ee36fa978933c69d4a5,PodSandboxId:55ace58bc419cdf89c7b5f891a67b33aa94e08b0650fb0346336f7685e19670a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766111722971085720,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 009e7ad8-75b8-4205-91aa-980d65bb83a4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kub
ernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14dd47506e6db4c1c5bf0105d048007e7b6fce1395f5974e14bbc518c4faf04a,PodSandboxId:874c35fdc7a5c34d400476afb453084421777d1f710b1d675a3d383f0aaa7eaa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1766111722942476552,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5j8g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d29d4a03-e8eb-4d11-bafc-d47ea5ede72e,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 2
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52a049ecd0fb48ec7198d01e8e295c6fb2a80e42ca2858b2d1b4c8077a828d55,PodSandboxId:09f936bd1f80b0df8536cee11289c78180e5f6b332e58d57e72fb7465096d86e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1766111722986772477,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-8tvs6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3a0f0e2-fe99-419e-a874-319cfe3e8dd7,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46f0d2f8a60f0dbd158a46e846583eb03edc2033ef005567b93ea2262aa4fa05,PodSandboxId:7f5f836c530bc91e6907a850cf150a97eef32e673540fd842d2fd5070f192d36,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1766111719544120834,Labe
ls:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-199791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64a96f1a0516105e4ebd728fb48b0648,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa115163640dc4719c1124afeb14c79fa01b41acf029663ee8bb4e570e78c375,PodSandboxId:ceba3cafce51714487340983f595fbeb46099c81d2f775a985fdfc62749c49cf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b2
5d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766111719332457450,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-199791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0c51f768ca9cef53541023863070d9e,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d903204735fb13ce21d9062ff0eb732ebe095fac0f7280517e6dd00c7ef9bd4e,PodSandboxId:e71c5e6505c8201b70109946b6c082cb4858fcf09de3d2f4913f2f8b80bd931c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:aec12dadf56dd45659a682
b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766111719292096637,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-199791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d29b63d26a2dfe9948639b49d1769e8,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:338f5e7eb5c69c5e72e44dc7916b142d97aaeaf9db504e426172bbb53a6d36eb,PodSandboxId:58e184d7505e42d78b545b795c3856aebe5a8d366597ea01932725c275ec
8e05,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766111716811044040,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-199791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e3d43cf11f27df64ddec0bd25dc66e3,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58657d9eea63330c183895d19b56ddc23095634d978e03443079491a1639
f28b,PodSandboxId:654c1b461cf348f51f9f5081ee7d0a33524193a5db776507a0fa699a128fc2d4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1766111679289177450,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5j8g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d29d4a03-e8eb-4d11-bafc-d47ea5ede72e,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85725cf13093dada9e4553872d01e38d79622fc88f2912555c64a806f347b8c4,PodSandboxId:88a84164bc70bcb8
5b52eba377e96b2b29ec6ddf471223d4156de7b0eede6597,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1766111679298440696,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-8tvs6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3a0f0e2-fe99-419e-a874-319cfe3e8dd7,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"
protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3075784cb0c1efff3c934fed9d726e8df01803e52a8cfe91ebf520417d3dfc95,PodSandboxId:e1d059d70d36e2c4c7d5370b5977a7ba91f8fbe8b2bb4509c589ebc9050e747d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766111679269407776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 009e7ad8-75b8-4205-91aa-980d65bb83a4,},Annotations:map[string]string{io.kubernetes.container.hash: 6
c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bee52ab239e495b06e0329d9e7d462a9622090d0b3752eeed407bdedbf468603,PodSandboxId:19e4328300f57681fd9b1ecb6db8edf2f79368681bf85ed161a36a6261a1ad81,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1766111653480503920,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-199791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d29b63d26a2dfe9948639b49d1769e8,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kub
ernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb15be933a8593073b861a6d6cdf247d04730954732953f38f8926b908ce48ad,PodSandboxId:e5bbcc1d5d42b6c926b19ae16686fd957eb33b577c94f8d99f43ba93a7ce3004,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedAt:1766111653427178217,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-199791,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: f0c51f768ca9cef53541023863070d9e,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd4284e1305c336fc0d68e458a9d6ce69c58edf45c2dcc81f806e69bd26c7e60,PodSandboxId:8d97915e3f5ac04d97267d1cfa0098c5318fd7389140d8cc4902a2205187ba66,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1766111653351343328,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kub
ernetes.pod.name: etcd-functional-199791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e3d43cf11f27df64ddec0bd25dc66e3,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=599bc068-0446-4078-a21a-4e2dbb6459a3 name=/runtime.v1.RuntimeService/ListContainers
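The CRI-O debug entries above are read from the crio unit journal on the node. As a rough sketch, assuming the functional-199791 profile is still running and reachable over SSH, the same stream can be inspected directly with:

    minikube -p functional-199791 ssh -- sudo journalctl -u crio --no-pager | tail -n 200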
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                         CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	92508c1f3c0c6       public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036   2 minutes ago       Running             mysql                     0                   543367912b4d9       mysql-6bcdcbc558-qcs65                      default
	915300e33a8cd       04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5                                              6 minutes ago       Running             myfrontend                0                   4cbdf2f437de9       sp-pod                                      default
	7a3c52996d6ef       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e           10 minutes ago      Exited              mount-munger              0                   4331c5201705a       busybox-mount                               default
	52a049ecd0fb4       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                              11 minutes ago      Running             coredns                   2                   09f936bd1f80b       coredns-66bc5c9577-8tvs6                    kube-system
	4e4ecc916e87a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                              11 minutes ago      Running             storage-provisioner       3                   55ace58bc419c       storage-provisioner                         kube-system
	14dd47506e6db       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                              11 minutes ago      Running             kube-proxy                2                   874c35fdc7a5c       kube-proxy-m5j8g                            kube-system
	46f0d2f8a60f0       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                              11 minutes ago      Running             kube-apiserver            0                   7f5f836c530bc       kube-apiserver-functional-199791            kube-system
	fa115163640dc       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                              11 minutes ago      Running             kube-controller-manager   2                   ceba3cafce517       kube-controller-manager-functional-199791   kube-system
	d903204735fb1       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                              11 minutes ago      Running             kube-scheduler            2                   e71c5e6505c82       kube-scheduler-functional-199791            kube-system
	338f5e7eb5c69       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                              11 minutes ago      Running             etcd                      2                   58e184d7505e4       etcd-functional-199791                      kube-system
	85725cf13093d       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                              11 minutes ago      Exited              coredns                   1                   88a84164bc70b       coredns-66bc5c9577-8tvs6                    kube-system
	58657d9eea633       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                              11 minutes ago      Exited              kube-proxy                1                   654c1b461cf34       kube-proxy-m5j8g                            kube-system
	3075784cb0c1e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                              11 minutes ago      Exited              storage-provisioner       2                   e1d059d70d36e       storage-provisioner                         kube-system
	bee52ab239e49       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                              12 minutes ago      Exited              kube-scheduler            1                   19e4328300f57       kube-scheduler-functional-199791            kube-system
	eb15be933a859       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                              12 minutes ago      Exited              kube-controller-manager   1                   e5bbcc1d5d42b       kube-controller-manager-functional-199791   kube-system
	cd4284e1305c3       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                              12 minutes ago      Exited              etcd                      1                   8d97915e3f5ac       etcd-functional-199791                      kube-system
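The status table above is the crictl view of the same containers listed in the debug log. A comparable listing can usually be produced on the node (assuming SSH access to the VM for this profile) with:

    minikube -p functional-199791 ssh -- sudo crictl ps -a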
	
	
	==> coredns [52a049ecd0fb48ec7198d01e8e295c6fb2a80e42ca2858b2d1b4c8077a828d55] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45039 - 52469 "HINFO IN 1968621207169872046.2815946503348017369. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.013061154s
	
	
	==> coredns [85725cf13093dada9e4553872d01e38d79622fc88f2912555c64a806f347b8c4] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45018 - 36548 "HINFO IN 5046039487923452892.2455866171052880201. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.063275704s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-199791
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-199791
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=functional-199791
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T02_33_42_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 02:33:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-199791
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 02:46:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 02:44:23 +0000   Fri, 19 Dec 2025 02:33:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 02:44:23 +0000   Fri, 19 Dec 2025 02:33:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 02:44:23 +0000   Fri, 19 Dec 2025 02:33:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 02:44:23 +0000   Fri, 19 Dec 2025 02:33:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.97
	  Hostname:    functional-199791
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001784Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001784Ki
	  pods:               110
	System Info:
	  Machine ID:                 234b0954db5c424884f9cda9cc9df33d
	  System UUID:                234b0954-db5c-4248-84f9-cda9cc9df33d
	  Boot ID:                    da2082df-9ed6-451c-a2bd-b19411e49feb
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (16 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-7lwrx                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-2gcm4                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-6bcdcbc558-qcs65                                   600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    6m47s
	  default                     sp-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m53s
	  kube-system                 coredns-66bc5c9577-8tvs6                                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-functional-199791                                   100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-functional-199791                         250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-functional-199791                200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-m5j8g                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-199791                         100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        kubernetes-dashboard-api-55487dd988-m95fb                100m (5%)     250m (12%)  200Mi (5%)       400Mi (10%)    10m
	  kubernetes-dashboard        kubernetes-dashboard-auth-59779df8d5-n7zvj               100m (5%)     250m (12%)  200Mi (5%)       400Mi (10%)    10m
	  kubernetes-dashboard        kubernetes-dashboard-kong-9849c64bd-shqrq                0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-metrics-scraper-7685fd8b77-46x9w    100m (5%)     250m (12%)  200Mi (5%)       400Mi (10%)    10m
	  kubernetes-dashboard        kubernetes-dashboard-web-5c9f966b98-jmj2w                100m (5%)     250m (12%)  200Mi (5%)       400Mi (10%)    10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests      Limits
	  --------           --------      ------
	  cpu                1750m (87%)   1700m (85%)
	  memory             1482Mi (37%)  2470Mi (63%)
	  ephemeral-storage  0 (0%)        0 (0%)
	  hugepages-2Mi      0 (0%)        0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node functional-199791 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node functional-199791 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node functional-199791 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     12m                kubelet          Node functional-199791 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node functional-199791 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node functional-199791 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeReady                12m                kubelet          Node functional-199791 status is now: NodeReady
	  Normal  RegisteredNode           12m                node-controller  Node functional-199791 event: Registered Node functional-199791 in Controller
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-199791 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-199791 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node functional-199791 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           11m                node-controller  Node functional-199791 event: Registered Node functional-199791 in Controller
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-199791 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-199791 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node functional-199791 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           11m                node-controller  Node functional-199791 event: Registered Node functional-199791 in Controller
	
	
	==> dmesg <==
	[  +1.178847] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000013] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.083161] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.103253] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.119955] kauditd_printk_skb: 171 callbacks suppressed
	[  +5.402477] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.562650] kauditd_printk_skb: 248 callbacks suppressed
	[Dec19 02:34] kauditd_printk_skb: 45 callbacks suppressed
	[  +7.285406] kauditd_printk_skb: 56 callbacks suppressed
	[ +20.521325] kauditd_printk_skb: 277 callbacks suppressed
	[  +1.896560] kauditd_printk_skb: 57 callbacks suppressed
	[Dec19 02:35] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.828763] kauditd_printk_skb: 78 callbacks suppressed
	[  +4.325250] kauditd_printk_skb: 281 callbacks suppressed
	[ +17.406952] kauditd_printk_skb: 51 callbacks suppressed
	[  +0.000034] kauditd_printk_skb: 133 callbacks suppressed
	[  +0.502326] kauditd_printk_skb: 197 callbacks suppressed
	[Dec19 02:36] kauditd_printk_skb: 47 callbacks suppressed
	[  +1.002971] kauditd_printk_skb: 29 callbacks suppressed
	[ +16.463942] kauditd_printk_skb: 38 callbacks suppressed
	[Dec19 02:39] kauditd_printk_skb: 5 callbacks suppressed
	[  +6.054527] kauditd_printk_skb: 61 callbacks suppressed
	[Dec19 02:40] kauditd_printk_skb: 38 callbacks suppressed
	[  +8.998021] crun[10134]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	
	
	==> etcd [338f5e7eb5c69c5e72e44dc7916b142d97aaeaf9db504e426172bbb53a6d36eb] <==
	{"level":"warn","ts":"2025-12-19T02:43:59.973008Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"153.90245ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T02:43:59.973031Z","caller":"traceutil/trace.go:172","msg":"trace[795864923] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1408; }","duration":"153.935ms","start":"2025-12-19T02:43:59.819089Z","end":"2025-12-19T02:43:59.973024Z","steps":["trace[795864923] 'agreement among raft nodes before linearized reading'  (duration: 153.848086ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T02:43:59.973080Z","caller":"traceutil/trace.go:172","msg":"trace[1851852010] transaction","detail":"{read_only:false; response_revision:1409; number_of_response:1; }","duration":"256.345441ms","start":"2025-12-19T02:43:59.716724Z","end":"2025-12-19T02:43:59.973070Z","steps":["trace[1851852010] 'process raft request'  (duration: 255.669588ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T02:44:02.471608Z","caller":"traceutil/trace.go:172","msg":"trace[422695900] linearizableReadLoop","detail":"{readStateIndex:1587; appliedIndex:1587; }","duration":"412.622902ms","start":"2025-12-19T02:44:02.058970Z","end":"2025-12-19T02:44:02.471593Z","steps":["trace[422695900] 'read index received'  (duration: 412.617405ms)","trace[422695900] 'applied index is now lower than readState.Index'  (duration: 4.748µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-19T02:44:02.471739Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"412.769184ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T02:44:02.471760Z","caller":"traceutil/trace.go:172","msg":"trace[858585964] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1409; }","duration":"412.800919ms","start":"2025-12-19T02:44:02.058953Z","end":"2025-12-19T02:44:02.471754Z","steps":["trace[858585964] 'agreement among raft nodes before linearized reading'  (duration: 412.740086ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T02:44:02.471827Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-19T02:44:02.058920Z","time spent":"412.855872ms","remote":"127.0.0.1:41782","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2025-12-19T02:44:02.471962Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"105.814415ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T02:44:02.472173Z","caller":"traceutil/trace.go:172","msg":"trace[1284434976] transaction","detail":"{read_only:false; response_revision:1410; number_of_response:1; }","duration":"489.08942ms","start":"2025-12-19T02:44:01.983073Z","end":"2025-12-19T02:44:02.472163Z","steps":["trace[1284434976] 'process raft request'  (duration: 488.652844ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T02:44:02.473542Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-19T02:44:01.983052Z","time spent":"489.16167ms","remote":"127.0.0.1:42120","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1409 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2025-12-19T02:44:02.473705Z","caller":"traceutil/trace.go:172","msg":"trace[1421226766] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1410; }","duration":"105.888516ms","start":"2025-12-19T02:44:02.366106Z","end":"2025-12-19T02:44:02.471995Z","steps":["trace[1421226766] 'agreement among raft nodes before linearized reading'  (duration: 105.79437ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T02:44:09.007755Z","caller":"traceutil/trace.go:172","msg":"trace[1730384954] linearizableReadLoop","detail":"{readStateIndex:1605; appliedIndex:1605; }","duration":"256.75946ms","start":"2025-12-19T02:44:08.750979Z","end":"2025-12-19T02:44:09.007738Z","steps":["trace[1730384954] 'read index received'  (duration: 256.75313ms)","trace[1730384954] 'applied index is now lower than readState.Index'  (duration: 4.946µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-19T02:44:09.008065Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"256.957391ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T02:44:09.008113Z","caller":"traceutil/trace.go:172","msg":"trace[1930144265] range","detail":"{range_begin:/registry/secrets; range_end:; response_count:0; response_revision:1426; }","duration":"257.138155ms","start":"2025-12-19T02:44:08.750964Z","end":"2025-12-19T02:44:09.008102Z","steps":["trace[1930144265] 'agreement among raft nodes before linearized reading'  (duration: 256.937966ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T02:44:09.010127Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"214.037317ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configuration.konghq.com/kongupstreampolicies\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T02:44:09.010852Z","caller":"traceutil/trace.go:172","msg":"trace[1439707217] range","detail":"{range_begin:/registry/configuration.konghq.com/kongupstreampolicies; range_end:; response_count:0; response_revision:1427; }","duration":"214.696344ms","start":"2025-12-19T02:44:08.796075Z","end":"2025-12-19T02:44:09.010772Z","steps":["trace[1439707217] 'agreement among raft nodes before linearized reading'  (duration: 214.019207ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T02:44:09.011304Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"192.057736ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T02:44:09.011925Z","caller":"traceutil/trace.go:172","msg":"trace[464886620] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1427; }","duration":"192.681688ms","start":"2025-12-19T02:44:08.819234Z","end":"2025-12-19T02:44:09.011916Z","steps":["trace[464886620] 'agreement among raft nodes before linearized reading'  (duration: 192.037521ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T02:44:09.012296Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"105.759228ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T02:44:09.012353Z","caller":"traceutil/trace.go:172","msg":"trace[1387153206] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1427; }","duration":"105.82155ms","start":"2025-12-19T02:44:08.906525Z","end":"2025-12-19T02:44:09.012347Z","steps":["trace[1387153206] 'agreement among raft nodes before linearized reading'  (duration: 105.743318ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T02:44:09.010690Z","caller":"traceutil/trace.go:172","msg":"trace[591124303] transaction","detail":"{read_only:false; response_revision:1427; number_of_response:1; }","duration":"478.46587ms","start":"2025-12-19T02:44:08.532214Z","end":"2025-12-19T02:44:09.010680Z","steps":["trace[591124303] 'process raft request'  (duration: 476.102861ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T02:44:09.014549Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-19T02:44:08.532200Z","time spent":"482.303174ms","remote":"127.0.0.1:42120","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1426 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2025-12-19T02:45:20.527457Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1191}
	{"level":"info","ts":"2025-12-19T02:45:20.556521Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1191,"took":"28.065485ms","hash":3149328947,"current-db-size-bytes":4468736,"current-db-size":"4.5 MB","current-db-size-in-use-bytes":2170880,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2025-12-19T02:45:20.556576Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3149328947,"revision":1191,"compact-revision":-1}
	
	
	==> etcd [cd4284e1305c336fc0d68e458a9d6ce69c58edf45c2dcc81f806e69bd26c7e60] <==
	{"level":"warn","ts":"2025-12-19T02:34:38.124414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:34:38.133292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:34:38.140842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:34:38.150636Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:34:38.156220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:34:38.163970Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:34:38.218090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43278","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-19T02:35:04.373005Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-19T02:35:04.373101Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-199791","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.97:2380"],"advertise-client-urls":["https://192.168.39.97:2379"]}
	{"level":"error","ts":"2025-12-19T02:35:04.373204Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-19T02:35:04.446479Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-19T02:35:04.447903Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-19T02:35:04.448008Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f61fae125a956d36","current-leader-member-id":"f61fae125a956d36"}
	{"level":"warn","ts":"2025-12-19T02:35:04.448024Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-19T02:35:04.448091Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-19T02:35:04.448099Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-19T02:35:04.448111Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-19T02:35:04.448123Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-19T02:35:04.448208Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.97:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-19T02:35:04.448225Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.97:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-19T02:35:04.448230Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.97:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-19T02:35:04.450532Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.97:2380"}
	{"level":"error","ts":"2025-12-19T02:35:04.450598Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.97:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-19T02:35:04.450618Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.97:2380"}
	{"level":"info","ts":"2025-12-19T02:35:04.450623Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-199791","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.97:2380"],"advertise-client-urls":["https://192.168.39.97:2379"]}
	
	
	==> kernel <==
	 02:46:38 up 13 min,  0 users,  load average: 0.27, 0.41, 0.30
	Linux functional-199791 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Dec 17 12:49:57 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [46f0d2f8a60f0dbd158a46e846583eb03edc2033ef005567b93ea2262aa4fa05] <==
	I1219 02:35:51.927006       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-kong-proxy" clusterIPs={"IPv4":"10.100.17.50"}
	I1219 02:35:51.931223       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-api" clusterIPs={"IPv4":"10.108.89.26"}
	I1219 02:35:51.949104       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-auth" clusterIPs={"IPv4":"10.101.76.242"}
	I1219 02:35:51.953340       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.36.176"}
	W1219 02:35:55.576924       1 logging.go:55] [core] [Channel #262 SubChannel #263]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:35:55.595559       1 logging.go:55] [core] [Channel #266 SubChannel #267]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:35:55.619008       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:35:55.640174       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:35:55.661743       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:35:55.689078       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:35:55.698101       1 logging.go:55] [core] [Channel #286 SubChannel #287]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:35:55.714985       1 logging.go:55] [core] [Channel #290 SubChannel #291]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:35:55.726744       1 logging.go:55] [core] [Channel #294 SubChannel #295]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1219 02:35:55.742735       1 logging.go:55] [core] [Channel #298 SubChannel #299]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:35:55.753287       1 logging.go:55] [core] [Channel #302 SubChannel #303]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:35:55.765961       1 logging.go:55] [core] [Channel #306 SubChannel #307]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1219 02:36:36.812853       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.107.160.98"}
	E1219 02:39:44.126008       1 conn.go:339] Error on socket receive: read tcp 192.168.39.97:8441->192.168.39.1:48230: use of closed network connection
	E1219 02:39:51.606508       1 conn.go:339] Error on socket receive: read tcp 192.168.39.97:8441->192.168.39.1:53096: use of closed network connection
	I1219 02:39:51.881111       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.98.103.219"}
	E1219 02:44:09.160613       1 conn.go:339] Error on socket receive: read tcp 192.168.39.97:8441->192.168.39.1:47456: use of closed network connection
	E1219 02:44:10.819249       1 conn.go:339] Error on socket receive: read tcp 192.168.39.97:8441->192.168.39.1:41376: use of closed network connection
	E1219 02:44:13.140731       1 conn.go:339] Error on socket receive: read tcp 192.168.39.97:8441->192.168.39.1:41394: use of closed network connection
	E1219 02:44:16.195437       1 conn.go:339] Error on socket receive: read tcp 192.168.39.97:8441->192.168.39.1:41406: use of closed network connection
	I1219 02:45:22.217530       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [eb15be933a8593073b861a6d6cdf247d04730954732953f38f8926b908ce48ad] <==
	I1219 02:34:42.166042       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1219 02:34:42.169402       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1219 02:34:42.171680       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1219 02:34:42.171700       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1219 02:34:42.171764       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1219 02:34:42.171874       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-199791"
	I1219 02:34:42.171917       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1219 02:34:42.175086       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1219 02:34:42.178845       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1219 02:34:42.182126       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1219 02:34:42.186364       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1219 02:34:42.187534       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1219 02:34:42.193743       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1219 02:34:42.198169       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1219 02:34:42.201146       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1219 02:34:42.201180       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1219 02:34:42.201488       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1219 02:34:42.201543       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1219 02:34:42.201917       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1219 02:34:42.212299       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1219 02:34:42.212376       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1219 02:34:42.212441       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1219 02:34:42.212461       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1219 02:34:42.217978       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1219 02:34:42.220151       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	
	
	==> kube-controller-manager [fa115163640dc4719c1124afeb14c79fa01b41acf029663ee8bb4e570e78c375] <==
	I1219 02:35:25.582388       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1219 02:35:25.582435       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1219 02:35:25.582454       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1219 02:35:25.582459       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1219 02:35:25.584454       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1219 02:35:25.592770       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1219 02:35:25.603743       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1219 02:35:25.603768       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1219 02:35:25.603774       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1219 02:35:25.604819       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1219 02:35:25.605873       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1219 02:35:25.608098       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1219 02:35:55.566516       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="kongconsumers.configuration.konghq.com"
	I1219 02:35:55.566569       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="kongcustomentities.configuration.konghq.com"
	I1219 02:35:55.566600       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="tcpingresses.configuration.konghq.com"
	I1219 02:35:55.566618       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="udpingresses.configuration.konghq.com"
	I1219 02:35:55.566633       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="kongupstreampolicies.configuration.konghq.com"
	I1219 02:35:55.566657       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="kongingresses.configuration.konghq.com"
	I1219 02:35:55.566681       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="kongplugins.configuration.konghq.com"
	I1219 02:35:55.566700       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingressclassparameterses.configuration.konghq.com"
	I1219 02:35:55.566725       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="kongconsumergroups.configuration.konghq.com"
	I1219 02:35:55.567019       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1219 02:35:55.609293       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1219 02:35:56.768760       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1219 02:35:56.810659       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [14dd47506e6db4c1c5bf0105d048007e7b6fce1395f5974e14bbc518c4faf04a] <==
	I1219 02:35:23.503232       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1219 02:35:23.603506       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1219 02:35:23.603535       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.97"]
	E1219 02:35:23.603650       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 02:35:23.635171       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1219 02:35:23.635275       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1219 02:35:23.635368       1 server_linux.go:132] "Using iptables Proxier"
	I1219 02:35:23.643469       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 02:35:23.643693       1 server.go:527] "Version info" version="v1.34.3"
	I1219 02:35:23.643719       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 02:35:23.647725       1 config.go:200] "Starting service config controller"
	I1219 02:35:23.647766       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 02:35:23.647778       1 config.go:106] "Starting endpoint slice config controller"
	I1219 02:35:23.647870       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 02:35:23.647909       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 02:35:23.647914       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 02:35:23.650778       1 config.go:309] "Starting node config controller"
	I1219 02:35:23.650960       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 02:35:23.651026       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 02:35:23.748896       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1219 02:35:23.748934       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 02:35:23.748950       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [58657d9eea63330c183895d19b56ddc23095634d978e03443079491a1639f28b] <==
	I1219 02:34:39.537664       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1219 02:34:39.638974       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1219 02:34:39.639014       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.97"]
	E1219 02:34:39.639074       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 02:34:39.669314       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1219 02:34:39.669367       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1219 02:34:39.669389       1 server_linux.go:132] "Using iptables Proxier"
	I1219 02:34:39.677347       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 02:34:39.677560       1 server.go:527] "Version info" version="v1.34.3"
	I1219 02:34:39.677584       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 02:34:39.681637       1 config.go:200] "Starting service config controller"
	I1219 02:34:39.681677       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 02:34:39.681702       1 config.go:106] "Starting endpoint slice config controller"
	I1219 02:34:39.681717       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 02:34:39.681743       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 02:34:39.681757       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 02:34:39.683323       1 config.go:309] "Starting node config controller"
	I1219 02:34:39.683545       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 02:34:39.683568       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 02:34:39.781988       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1219 02:34:39.782036       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 02:34:39.782051       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [bee52ab239e495b06e0329d9e7d462a9622090d0b3752eeed407bdedbf468603] <==
	I1219 02:34:36.594680       1 serving.go:386] Generated self-signed cert in-memory
	W1219 02:34:38.741608       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1219 02:34:38.741671       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1219 02:34:38.741693       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1219 02:34:38.741710       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1219 02:34:38.844049       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1219 02:34:38.847658       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 02:34:38.851579       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1219 02:34:38.852772       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1219 02:34:38.852857       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 02:34:38.867019       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 02:34:38.967418       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 02:35:04.388266       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1219 02:35:04.388679       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1219 02:35:04.388863       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1219 02:35:04.389100       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 02:35:04.389647       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1219 02:35:04.389748       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d903204735fb13ce21d9062ff0eb732ebe095fac0f7280517e6dd00c7ef9bd4e] <==
	I1219 02:35:21.360923       1 serving.go:386] Generated self-signed cert in-memory
	I1219 02:35:23.329893       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1219 02:35:23.330954       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 02:35:23.339922       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1219 02:35:23.340017       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1219 02:35:23.340119       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 02:35:23.340144       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 02:35:23.340167       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1219 02:35:23.340191       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1219 02:35:23.341060       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1219 02:35:23.341399       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1219 02:35:23.441256       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1219 02:35:23.441470       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 02:35:23.441501       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Dec 19 02:45:52 functional-199791 kubelet[6110]: E1219 02:45:52.625742    6110 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest 1.4.0 in docker.io/kubernetesui/dashboard-auth: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard-auth:1.4.0"
	Dec 19 02:45:52 functional-199791 kubelet[6110]: E1219 02:45:52.626110    6110 kuberuntime_manager.go:1449] "Unhandled Error" err="container kubernetes-dashboard-auth start failed in pod kubernetes-dashboard-auth-59779df8d5-n7zvj_kubernetes-dashboard(1b3ebf3f-5063-42ed-a19e-e3e50a010e9f): ErrImagePull: reading manifest 1.4.0 in docker.io/kubernetesui/dashboard-auth: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 19 02:45:52 functional-199791 kubelet[6110]: E1219 02:45:52.626147    6110 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-auth\" with ErrImagePull: \"reading manifest 1.4.0 in docker.io/kubernetesui/dashboard-auth: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-auth-59779df8d5-n7zvj" podUID="1b3ebf3f-5063-42ed-a19e-e3e50a010e9f"
	Dec 19 02:45:58 functional-199791 kubelet[6110]: E1219 02:45:58.957221    6110 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766112358956932317  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:243802}  inodes_used:{value:113}}"
	Dec 19 02:45:58 functional-199791 kubelet[6110]: E1219 02:45:58.957243    6110 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766112358956932317  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:243802}  inodes_used:{value:113}}"
	Dec 19 02:46:05 functional-199791 kubelet[6110]: E1219 02:46:05.659150    6110 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-auth\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-auth:1.4.0\\\": ErrImagePull: reading manifest 1.4.0 in docker.io/kubernetesui/dashboard-auth: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-auth-59779df8d5-n7zvj" podUID="1b3ebf3f-5063-42ed-a19e-e3e50a010e9f"
	Dec 19 02:46:08 functional-199791 kubelet[6110]: E1219 02:46:08.959412    6110 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766112368959084703  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:243802}  inodes_used:{value:113}}"
	Dec 19 02:46:08 functional-199791 kubelet[6110]: E1219 02:46:08.959437    6110 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766112368959084703  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:243802}  inodes_used:{value:113}}"
	Dec 19 02:46:17 functional-199791 kubelet[6110]: E1219 02:46:17.660072    6110 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-auth\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-auth:1.4.0\\\": ErrImagePull: reading manifest 1.4.0 in docker.io/kubernetesui/dashboard-auth: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-auth-59779df8d5-n7zvj" podUID="1b3ebf3f-5063-42ed-a19e-e3e50a010e9f"
	Dec 19 02:46:18 functional-199791 kubelet[6110]: E1219 02:46:18.781531    6110 manager.go:1116] Failed to create existing container: /kubepods/burstable/podf0c51f768ca9cef53541023863070d9e/crio-e5bbcc1d5d42b6c926b19ae16686fd957eb33b577c94f8d99f43ba93a7ce3004: Error finding container e5bbcc1d5d42b6c926b19ae16686fd957eb33b577c94f8d99f43ba93a7ce3004: Status 404 returned error can't find the container with id e5bbcc1d5d42b6c926b19ae16686fd957eb33b577c94f8d99f43ba93a7ce3004
	Dec 19 02:46:18 functional-199791 kubelet[6110]: E1219 02:46:18.782255    6110 manager.go:1116] Failed to create existing container: /kubepods/burstable/podd3a0f0e2-fe99-419e-a874-319cfe3e8dd7/crio-88a84164bc70bcb85b52eba377e96b2b29ec6ddf471223d4156de7b0eede6597: Error finding container 88a84164bc70bcb85b52eba377e96b2b29ec6ddf471223d4156de7b0eede6597: Status 404 returned error can't find the container with id 88a84164bc70bcb85b52eba377e96b2b29ec6ddf471223d4156de7b0eede6597
	Dec 19 02:46:18 functional-199791 kubelet[6110]: E1219 02:46:18.782607    6110 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod6d29b63d26a2dfe9948639b49d1769e8/crio-19e4328300f57681fd9b1ecb6db8edf2f79368681bf85ed161a36a6261a1ad81: Error finding container 19e4328300f57681fd9b1ecb6db8edf2f79368681bf85ed161a36a6261a1ad81: Status 404 returned error can't find the container with id 19e4328300f57681fd9b1ecb6db8edf2f79368681bf85ed161a36a6261a1ad81
	Dec 19 02:46:18 functional-199791 kubelet[6110]: E1219 02:46:18.783032    6110 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod009e7ad8-75b8-4205-91aa-980d65bb83a4/crio-e1d059d70d36e2c4c7d5370b5977a7ba91f8fbe8b2bb4509c589ebc9050e747d: Error finding container e1d059d70d36e2c4c7d5370b5977a7ba91f8fbe8b2bb4509c589ebc9050e747d: Status 404 returned error can't find the container with id e1d059d70d36e2c4c7d5370b5977a7ba91f8fbe8b2bb4509c589ebc9050e747d
	Dec 19 02:46:18 functional-199791 kubelet[6110]: E1219 02:46:18.783255    6110 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod7e3d43cf11f27df64ddec0bd25dc66e3/crio-8d97915e3f5ac04d97267d1cfa0098c5318fd7389140d8cc4902a2205187ba66: Error finding container 8d97915e3f5ac04d97267d1cfa0098c5318fd7389140d8cc4902a2205187ba66: Status 404 returned error can't find the container with id 8d97915e3f5ac04d97267d1cfa0098c5318fd7389140d8cc4902a2205187ba66
	Dec 19 02:46:18 functional-199791 kubelet[6110]: E1219 02:46:18.783452    6110 manager.go:1116] Failed to create existing container: /kubepods/besteffort/podd29d4a03-e8eb-4d11-bafc-d47ea5ede72e/crio-654c1b461cf348f51f9f5081ee7d0a33524193a5db776507a0fa699a128fc2d4: Error finding container 654c1b461cf348f51f9f5081ee7d0a33524193a5db776507a0fa699a128fc2d4: Status 404 returned error can't find the container with id 654c1b461cf348f51f9f5081ee7d0a33524193a5db776507a0fa699a128fc2d4
	Dec 19 02:46:18 functional-199791 kubelet[6110]: E1219 02:46:18.961328    6110 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766112378961091949  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:243802}  inodes_used:{value:113}}"
	Dec 19 02:46:18 functional-199791 kubelet[6110]: E1219 02:46:18.961347    6110 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766112378961091949  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:243802}  inodes_used:{value:113}}"
	Dec 19 02:46:24 functional-199791 kubelet[6110]: E1219 02:46:24.285545    6110 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest 1.2.2 in docker.io/kubernetesui/dashboard-metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2"
	Dec 19 02:46:24 functional-199791 kubelet[6110]: E1219 02:46:24.285590    6110 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest 1.2.2 in docker.io/kubernetesui/dashboard-metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2"
	Dec 19 02:46:24 functional-199791 kubelet[6110]: E1219 02:46:24.285913    6110 kuberuntime_manager.go:1449] "Unhandled Error" err="container kubernetes-dashboard-metrics-scraper start failed in pod kubernetes-dashboard-metrics-scraper-7685fd8b77-46x9w_kubernetes-dashboard(28e86966-9197-4aa8-b20a-9c3c2361adc6): ErrImagePull: reading manifest 1.2.2 in docker.io/kubernetesui/dashboard-metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 19 02:46:24 functional-199791 kubelet[6110]: E1219 02:46:24.285953    6110 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-metrics-scraper\" with ErrImagePull: \"reading manifest 1.2.2 in docker.io/kubernetesui/dashboard-metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-46x9w" podUID="28e86966-9197-4aa8-b20a-9c3c2361adc6"
	Dec 19 02:46:28 functional-199791 kubelet[6110]: E1219 02:46:28.963278    6110 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766112388962481744  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:243802}  inodes_used:{value:113}}"
	Dec 19 02:46:28 functional-199791 kubelet[6110]: E1219 02:46:28.963302    6110 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766112388962481744  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:243802}  inodes_used:{value:113}}"
	Dec 19 02:46:29 functional-199791 kubelet[6110]: E1219 02:46:29.659059    6110 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-auth\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-auth:1.4.0\\\": ErrImagePull: reading manifest 1.4.0 in docker.io/kubernetesui/dashboard-auth: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-auth-59779df8d5-n7zvj" podUID="1b3ebf3f-5063-42ed-a19e-e3e50a010e9f"
	Dec 19 02:46:37 functional-199791 kubelet[6110]: E1219 02:46:37.657930    6110 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\": ErrImagePull: reading manifest 1.2.2 in docker.io/kubernetesui/dashboard-metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-46x9w" podUID="28e86966-9197-4aa8-b20a-9c3c2361adc6"
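	
	The repeated ErrImagePull/ImagePullBackOff entries above come from Docker Hub's unauthenticated pull rate limit (toomanyrequests) rejecting the dashboard images, not from a node or CRI-O fault. A possible workaround, sketched here on the assumption that a Docker daemon and Docker Hub credentials are available on the host running the tests, is to pull the affected images there and load them into the profile's CRI-O image store:
	
	docker login                                                          # authenticate to lift the anonymous rate limit
	docker pull docker.io/kubernetesui/dashboard-auth:1.4.0              # pull on the host, not in the cluster
	docker pull docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2
	minikube -p functional-199791 image load docker.io/kubernetesui/dashboard-auth:1.4.0
	minikube -p functional-199791 image load docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2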
	
	
	==> storage-provisioner [3075784cb0c1efff3c934fed9d726e8df01803e52a8cfe91ebf520417d3dfc95] <==
	I1219 02:34:39.397906       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1219 02:34:39.414358       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1219 02:34:39.414933       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1219 02:34:39.418048       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:34:42.877145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:34:47.137842       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:34:50.736020       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:34:53.790160       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:34:56.812611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:34:56.817421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1219 02:34:56.817536       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1219 02:34:56.817676       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-199791_d10e771c-be38-4def-81f0-9169b6a9faaa!
	I1219 02:34:56.818428       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"666199d8-8509-44d4-8b92-058a8c4f7820", APIVersion:"v1", ResourceVersion:"533", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-199791_d10e771c-be38-4def-81f0-9169b6a9faaa became leader
	W1219 02:34:56.820032       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:34:56.828175       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1219 02:34:56.918469       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-199791_d10e771c-be38-4def-81f0-9169b6a9faaa!
	W1219 02:34:58.831310       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:34:58.838546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:35:00.843150       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:35:00.851301       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:35:02.855336       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:35:02.860314       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
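	
	The warnings.go:70 lines are emitted because the storage provisioner's leader election still locks on a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath), which the API server flags as deprecated in favor of discovery.k8s.io/v1 EndpointSlice; they are noisy but harmless in this run. The lock object named in the log can be inspected directly, for example:
	
	kubectl --context functional-199791 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml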
	
	
	==> storage-provisioner [4e4ecc916e87a11785556ec14c15e599641a328ac5360ee36fa978933c69d4a5] <==
	W1219 02:46:13.631857       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:46:15.635199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:46:15.640217       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:46:17.643123       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:46:17.647530       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:46:19.651385       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:46:19.656011       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:46:21.659403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:46:21.667711       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:46:23.670777       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:46:23.674964       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:46:25.678507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:46:25.683381       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:46:27.686463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:46:27.693955       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:46:29.696771       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:46:29.704176       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:46:31.706970       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:46:31.711680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:46:33.714908       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:46:33.719419       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:46:35.722319       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:46:35.727677       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:46:37.736506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:46:37.744311       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-199791 -n functional-199791
helpers_test.go:270: (dbg) Run:  kubectl --context functional-199791 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: busybox-mount hello-node-75c85bcc94-7lwrx hello-node-connect-7d85dfc575-2gcm4 kubernetes-dashboard-api-55487dd988-m95fb kubernetes-dashboard-auth-59779df8d5-n7zvj kubernetes-dashboard-kong-9849c64bd-shqrq kubernetes-dashboard-metrics-scraper-7685fd8b77-46x9w kubernetes-dashboard-web-5c9f966b98-jmj2w
helpers_test.go:283: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context functional-199791 describe pod busybox-mount hello-node-75c85bcc94-7lwrx hello-node-connect-7d85dfc575-2gcm4 kubernetes-dashboard-api-55487dd988-m95fb kubernetes-dashboard-auth-59779df8d5-n7zvj kubernetes-dashboard-kong-9849c64bd-shqrq kubernetes-dashboard-metrics-scraper-7685fd8b77-46x9w kubernetes-dashboard-web-5c9f966b98-jmj2w
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context functional-199791 describe pod busybox-mount hello-node-75c85bcc94-7lwrx hello-node-connect-7d85dfc575-2gcm4 kubernetes-dashboard-api-55487dd988-m95fb kubernetes-dashboard-auth-59779df8d5-n7zvj kubernetes-dashboard-kong-9849c64bd-shqrq kubernetes-dashboard-metrics-scraper-7685fd8b77-46x9w kubernetes-dashboard-web-5c9f966b98-jmj2w: exit status 1 (92.908738ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-199791/192.168.39.97
	Start Time:       Fri, 19 Dec 2025 02:35:47 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://7a3c52996d6ef4428177d121063d57150d5c4de8dc99d3c67aa49e6528059cc0
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Fri, 19 Dec 2025 02:36:21 +0000
	      Finished:     Fri, 19 Dec 2025 02:36:21 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wp9z4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-wp9z4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-199791
	  Normal  Pulling    10m   kubelet            spec.containers{mount-munger}: Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            spec.containers{mount-munger}: Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 3.19s (33.462s including waiting). Image size: 4631262 bytes.
	  Normal  Created    10m   kubelet            spec.containers{mount-munger}: Created container: mount-munger
	  Normal  Started    10m   kubelet            spec.containers{mount-munger}: Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-7lwrx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-199791/192.168.39.97
	Start Time:       Fri, 19 Dec 2025 02:35:45 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tx7cl (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-tx7cl:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-7lwrx to functional-199791
	  Warning  Failed     10m                   kubelet            spec.containers{echo-server}: Failed to pull image "kicbase/echo-server": copying system image from manifest list: determining manifest MIME type for docker://kicbase/echo-server:latest: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m5s (x3 over 10m)    kubelet            spec.containers{echo-server}: Error: ErrImagePull
	  Warning  Failed     2m5s (x2 over 6m31s)  kubelet            spec.containers{echo-server}: Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    98s (x4 over 10m)     kubelet            spec.containers{echo-server}: Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     98s (x4 over 10m)     kubelet            spec.containers{echo-server}: Error: ImagePullBackOff
	  Normal   Pulling    84s (x4 over 10m)     kubelet            spec.containers{echo-server}: Pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-2gcm4
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-199791/192.168.39.97
	Start Time:       Fri, 19 Dec 2025 02:36:36 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.15
	IPs:
	  IP:           10.244.0.15
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-v8jgj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-v8jgj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-2gcm4 to functional-199791
	  Warning  Failed     5m59s                kubelet            spec.containers{echo-server}: Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     79s (x2 over 5m59s)  kubelet            spec.containers{echo-server}: Error: ErrImagePull
	  Warning  Failed     79s                  kubelet            spec.containers{echo-server}: Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    67s (x2 over 5m59s)  kubelet            spec.containers{echo-server}: Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     67s (x2 over 5m59s)  kubelet            spec.containers{echo-server}: Error: ImagePullBackOff
	  Normal   Pulling    55s (x3 over 10m)    kubelet            spec.containers{echo-server}: Pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "kubernetes-dashboard-api-55487dd988-m95fb" not found
	Error from server (NotFound): pods "kubernetes-dashboard-auth-59779df8d5-n7zvj" not found
	Error from server (NotFound): pods "kubernetes-dashboard-kong-9849c64bd-shqrq" not found
	Error from server (NotFound): pods "kubernetes-dashboard-metrics-scraper-7685fd8b77-46x9w" not found
	Error from server (NotFound): pods "kubernetes-dashboard-web-5c9f966b98-jmj2w" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context functional-199791 describe pod busybox-mount hello-node-75c85bcc94-7lwrx hello-node-connect-7d85dfc575-2gcm4 kubernetes-dashboard-api-55487dd988-m95fb kubernetes-dashboard-auth-59779df8d5-n7zvj kubernetes-dashboard-kong-9849c64bd-shqrq kubernetes-dashboard-metrics-scraper-7685fd8b77-46x9w kubernetes-dashboard-web-5c9f966b98-jmj2w: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (602.49s)
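	
	As the describe output above shows, hello-node-connect-7d85dfc575-2gcm4 never got past ImagePullBackOff on kicbase/echo-server, again because of the Docker Hub rate limit, so the connect test simply timed out waiting for a Ready pod. One way to confirm that the failing pods across the cluster share this cause (a sketch, reusing the context name from this run) is to list the pull-failure events:
	
	kubectl --context functional-199791 get events -A --field-selector reason=Failed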

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-199791 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-199791 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-7lwrx" [96192f1e-8144-4970-a848-961ca9d6a26b] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:338: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-199791 -n functional-199791
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-12-19 02:45:45.589007869 +0000 UTC m=+1248.627187179
functional_test.go:1460: (dbg) Run:  kubectl --context functional-199791 describe po hello-node-75c85bcc94-7lwrx -n default
functional_test.go:1460: (dbg) kubectl --context functional-199791 describe po hello-node-75c85bcc94-7lwrx -n default:
Name:             hello-node-75c85bcc94-7lwrx
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-199791/192.168.39.97
Start Time:       Fri, 19 Dec 2025 02:35:45 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tx7cl (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-tx7cl:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-7lwrx to functional-199791
Warning  Failed     9m27s                kubelet            spec.containers{echo-server}: Failed to pull image "kicbase/echo-server": copying system image from manifest list: determining manifest MIME type for docker://kicbase/echo-server:latest: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     71s (x3 over 9m27s)  kubelet            spec.containers{echo-server}: Error: ErrImagePull
Warning  Failed     71s (x2 over 5m37s)  kubelet            spec.containers{echo-server}: Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    44s (x4 over 9m26s)  kubelet            spec.containers{echo-server}: Back-off pulling image "kicbase/echo-server"
Warning  Failed     44s (x4 over 9m26s)  kubelet            spec.containers{echo-server}: Error: ImagePullBackOff
Normal   Pulling    30s (x4 over 10m)    kubelet            spec.containers{echo-server}: Pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-199791 logs hello-node-75c85bcc94-7lwrx -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-199791 logs hello-node-75c85bcc94-7lwrx -n default: exit status 1 (66.559019ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-7lwrx" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-199791 logs hello-node-75c85bcc94-7lwrx -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.51s)
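	
	The deployment was created with an untagged image (kicbase/echo-server), which Kubernetes resolves to :latest and pulls with imagePullPolicy Always, so every retry goes back to Docker Hub and hits the same rate limit. A sketch of how a run could sidestep this, assuming a pinned tag such as kicbase/echo-server:1.0 has already been made available locally (for example via an authenticated docker pull), would be:
	
	minikube -p functional-199791 image load kicbase/echo-server:1.0
	kubectl --context functional-199791 create deployment hello-node --image=kicbase/echo-server:1.0
	kubectl --context functional-199791 expose deployment hello-node --type=NodePort --port=8080
	
	With an explicit non-latest tag the default pull policy is IfNotPresent, so the pre-loaded image would be used instead of re-resolving against the registry.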

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-199791 service --namespace=default --https --url hello-node: exit status 115 (231.285514ms)

                                                
                                                
-- stdout --
	https://192.168.39.97:32348
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-199791 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.23s)
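	
	minikube service exits with SVC_UNREACHABLE (exit status 115) whenever the target service has no running backing pod, even though the NodePort URL itself resolves, which is why the HTTPS, Format and URL subtests fail within a fraction of a second rather than timing out. A quick way to see the empty backend set (assuming the same context) is:
	
	kubectl --context functional-199791 get svc,endpoints hello-node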

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-199791 service hello-node --url --format={{.IP}}: exit status 115 (237.41602ms)

                                                
                                                
-- stdout --
	192.168.39.97
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-199791 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-199791 service hello-node --url: exit status 115 (231.417662ms)

                                                
                                                
-- stdout --
	http://192.168.39.97:32348
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-199791 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.97:32348
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.23s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd (301.94s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-936345 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-936345 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-936345 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-936345 --alsologtostderr -v=1] stderr:
I1219 02:49:20.716496   19667 out.go:360] Setting OutFile to fd 1 ...
I1219 02:49:20.716628   19667 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:49:20.716643   19667 out.go:374] Setting ErrFile to fd 2...
I1219 02:49:20.716646   19667 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:49:20.716821   19667 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5010/.minikube/bin
I1219 02:49:20.717039   19667 mustload.go:66] Loading cluster: functional-936345
I1219 02:49:20.718044   19667 config.go:182] Loaded profile config "functional-936345": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1219 02:49:20.720130   19667 host.go:66] Checking if "functional-936345" exists ...
I1219 02:49:20.720301   19667 api_server.go:166] Checking apiserver status ...
I1219 02:49:20.720336   19667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1219 02:49:20.722087   19667 main.go:144] libmachine: domain functional-936345 has defined MAC address 52:54:00:63:7d:56 in network mk-functional-936345
I1219 02:49:20.722414   19667 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:63:7d:56", ip: ""} in network mk-functional-936345: {Iface:virbr1 ExpiryTime:2025-12-19 03:46:54 +0000 UTC Type:0 Mac:52:54:00:63:7d:56 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:functional-936345 Clientid:01:52:54:00:63:7d:56}
I1219 02:49:20.722437   19667 main.go:144] libmachine: domain functional-936345 has defined IP address 192.168.39.80 and MAC address 52:54:00:63:7d:56 in network mk-functional-936345
I1219 02:49:20.722543   19667 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/functional-936345/id_rsa Username:docker}
I1219 02:49:20.819385   19667 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5866/cgroup
W1219 02:49:20.833343   19667 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5866/cgroup: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1219 02:49:20.833413   19667 ssh_runner.go:195] Run: ls
I1219 02:49:20.838193   19667 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8441/healthz ...
I1219 02:49:20.843034   19667 api_server.go:279] https://192.168.39.80:8441/healthz returned 200:
ok
W1219 02:49:20.843070   19667 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1219 02:49:20.843201   19667 config.go:182] Loaded profile config "functional-936345": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1219 02:49:20.843217   19667 addons.go:70] Setting dashboard=true in profile "functional-936345"
I1219 02:49:20.843224   19667 addons.go:239] Setting addon dashboard=true in "functional-936345"
I1219 02:49:20.843242   19667 host.go:66] Checking if "functional-936345" exists ...
I1219 02:49:20.844755   19667 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
I1219 02:49:20.844769   19667 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
I1219 02:49:20.846974   19667 main.go:144] libmachine: domain functional-936345 has defined MAC address 52:54:00:63:7d:56 in network mk-functional-936345
I1219 02:49:20.847374   19667 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:63:7d:56", ip: ""} in network mk-functional-936345: {Iface:virbr1 ExpiryTime:2025-12-19 03:46:54 +0000 UTC Type:0 Mac:52:54:00:63:7d:56 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:functional-936345 Clientid:01:52:54:00:63:7d:56}
I1219 02:49:20.847401   19667 main.go:144] libmachine: domain functional-936345 has defined IP address 192.168.39.80 and MAC address 52:54:00:63:7d:56 in network mk-functional-936345
I1219 02:49:20.847538   19667 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/functional-936345/id_rsa Username:docker}
I1219 02:49:20.945830   19667 ssh_runner.go:195] Run: test -f /usr/bin/helm
I1219 02:49:20.948527   19667 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
I1219 02:49:20.951158   19667 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
I1219 02:49:21.734104   19667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
I1219 02:49:25.012093   19667 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (3.277935483s)
I1219 02:49:25.012186   19667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
I1219 02:49:25.313490   19667 addons.go:500] Verifying addon dashboard=true in "functional-936345"
I1219 02:49:25.316535   19667 out.go:179] * Verifying dashboard addon...
I1219 02:49:25.318309   19667 kapi.go:59] client config for functional-936345: &rest.Config{Host:"https://192.168.39.80:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-936345/client.crt", KeyFile:"/home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-936345/client.key", CAFile:"/home/jenkins/minikube-integration/22230-5010/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2863880), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1219 02:49:25.318925   19667 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1219 02:49:25.318949   19667 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1219 02:49:25.318956   19667 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1219 02:49:25.318964   19667 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1219 02:49:25.318972   19667 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1219 02:49:25.319394   19667 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
I1219 02:49:25.346132   19667 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
I1219 02:49:25.346159   19667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:49:25.825775   19667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:49:26.323923   19667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:49:26.823721   19667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:49:27.325078   19667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:49:27.824116   19667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:49:28.323903   19667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:49:28.824460   19667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:49:29.322651   19667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:49:29.824281   19667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:49:30.325462   19667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:49:30.824370   19667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:49:31.322369   19667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:49:31.824341   19667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:49:32.323210   19667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:49:32.824407   19667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:49:33.322807   19667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:49:33.827075   19667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:49:34.322813   19667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:49:34.824136   19667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:49:35.322404   19667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:49:35.824189   19667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:49:36.322557   19667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:49:36.825983   19667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:49:37.322298   19667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:49:37.825139   19667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:49:38.322726   19667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:49:38.823767   19667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:49:39.323417   19667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:49:39.823743   19667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:49:40.323824   19667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:49:40.824690   19667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:49:41.322963   19667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:49:41.823975   19667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:49:42.322836   19667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:49:42.824170   19667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:49:43.322718   19667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:49:43.827445   19667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:49:44.323561   19667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:49:44.822664   19667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:49:45.323007   19667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:49:45.823392   19667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:49:46.322991   19667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:49:46.825690   19667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:49:47.323132   19667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:49:47.825751   19667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
[... the same kapi.go:96 "waiting for pod" message repeats roughly every 500ms from 02:49:48 through 02:52:47; the kubernetes-dashboard-web pod stays Pending for the entire interval ...]
I1219 02:52:48.322873   19667 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
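The loop above is minikube's kapi wait helper polling the dashboard-web pod by label selector at a ~500ms interval until it leaves Pending. As a rough sketch only (this is not the actual kapi.go:96 implementation; the package and function names below are illustrative assumptions), an equivalent client-go poll looks like this:

// Illustrative sketch only: NOT minikube's kapi.go code. It shows how a 500ms
// label-selector poll like the one logged above can be written with client-go.
// The package and function names are assumptions made for this example.
package kapisketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodByLabel polls the namespace every 500ms until a pod matching the
// label selector reaches Running, or until the timeout expires.
func waitForPodByLabel(ctx context.Context, c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, nil // treat API hiccups as "not ready yet" and keep polling
			}
			state := "Pending"
			if len(pods.Items) > 0 {
				state = string(pods.Items[0].Status.Phase)
			}
			fmt.Printf("waiting for pod %q, current state: %s\n", selector, state)
			return len(pods.Items) > 0 && pods.Items[0].Status.Phase == corev1.PodRunning, nil
		})
}

When the pod never gets past Pending, as in this run, a poll of this shape simply burns its whole timeout and the calling test (DashboardCmd here) fails afterwards, which is the pattern visible in the log above.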
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-936345 -n functional-936345
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-936345 logs -n 25: (1.195390196s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                             ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount   │ -p functional-936345 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2523260388/001:/mount2 --alsologtostderr -v=1                         │ functional-936345 │ jenkins │ v1.37.0 │ 19 Dec 25 02:50 UTC │                     │
	│ mount   │ -p functional-936345 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2523260388/001:/mount1 --alsologtostderr -v=1                         │ functional-936345 │ jenkins │ v1.37.0 │ 19 Dec 25 02:50 UTC │                     │
	│ ssh     │ functional-936345 ssh findmnt -T /mount1                                                                                                                     │ functional-936345 │ jenkins │ v1.37.0 │ 19 Dec 25 02:50 UTC │                     │
	│ ssh     │ functional-936345 ssh findmnt -T /mount1                                                                                                                     │ functional-936345 │ jenkins │ v1.37.0 │ 19 Dec 25 02:50 UTC │ 19 Dec 25 02:50 UTC │
	│ ssh     │ functional-936345 ssh findmnt -T /mount2                                                                                                                     │ functional-936345 │ jenkins │ v1.37.0 │ 19 Dec 25 02:50 UTC │ 19 Dec 25 02:50 UTC │
	│ ssh     │ functional-936345 ssh findmnt -T /mount3                                                                                                                     │ functional-936345 │ jenkins │ v1.37.0 │ 19 Dec 25 02:50 UTC │ 19 Dec 25 02:50 UTC │
	│ mount   │ -p functional-936345 --kill=true                                                                                                                             │ functional-936345 │ jenkins │ v1.37.0 │ 19 Dec 25 02:50 UTC │                     │
	│ image   │ functional-936345 image load --daemon kicbase/echo-server:functional-936345 --alsologtostderr                                                                │ functional-936345 │ jenkins │ v1.37.0 │ 19 Dec 25 02:50 UTC │ 19 Dec 25 02:50 UTC │
	│ image   │ functional-936345 image ls                                                                                                                                   │ functional-936345 │ jenkins │ v1.37.0 │ 19 Dec 25 02:50 UTC │ 19 Dec 25 02:50 UTC │
	│ image   │ functional-936345 image load --daemon kicbase/echo-server:functional-936345 --alsologtostderr                                                                │ functional-936345 │ jenkins │ v1.37.0 │ 19 Dec 25 02:50 UTC │ 19 Dec 25 02:50 UTC │
	│ image   │ functional-936345 image ls                                                                                                                                   │ functional-936345 │ jenkins │ v1.37.0 │ 19 Dec 25 02:50 UTC │ 19 Dec 25 02:50 UTC │
	│ image   │ functional-936345 image load --daemon kicbase/echo-server:functional-936345 --alsologtostderr                                                                │ functional-936345 │ jenkins │ v1.37.0 │ 19 Dec 25 02:50 UTC │ 19 Dec 25 02:50 UTC │
	│ image   │ functional-936345 image ls                                                                                                                                   │ functional-936345 │ jenkins │ v1.37.0 │ 19 Dec 25 02:50 UTC │ 19 Dec 25 02:50 UTC │
	│ image   │ functional-936345 image save kicbase/echo-server:functional-936345 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-936345 │ jenkins │ v1.37.0 │ 19 Dec 25 02:50 UTC │ 19 Dec 25 02:50 UTC │
	│ image   │ functional-936345 image rm kicbase/echo-server:functional-936345 --alsologtostderr                                                                           │ functional-936345 │ jenkins │ v1.37.0 │ 19 Dec 25 02:50 UTC │ 19 Dec 25 02:50 UTC │
	│ image   │ functional-936345 image ls                                                                                                                                   │ functional-936345 │ jenkins │ v1.37.0 │ 19 Dec 25 02:50 UTC │ 19 Dec 25 02:50 UTC │
	│ image   │ functional-936345 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-936345 │ jenkins │ v1.37.0 │ 19 Dec 25 02:50 UTC │ 19 Dec 25 02:50 UTC │
	│ image   │ functional-936345 image ls                                                                                                                                   │ functional-936345 │ jenkins │ v1.37.0 │ 19 Dec 25 02:50 UTC │ 19 Dec 25 02:50 UTC │
	│ image   │ functional-936345 image save --daemon kicbase/echo-server:functional-936345 --alsologtostderr                                                                │ functional-936345 │ jenkins │ v1.37.0 │ 19 Dec 25 02:50 UTC │ 19 Dec 25 02:50 UTC │
	│ license │                                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 19 Dec 25 02:50 UTC │ 19 Dec 25 02:50 UTC │
	│ ssh     │ functional-936345 ssh sudo systemctl is-active docker                                                                                                        │ functional-936345 │ jenkins │ v1.37.0 │ 19 Dec 25 02:50 UTC │                     │
	│ ssh     │ functional-936345 ssh sudo systemctl is-active containerd                                                                                                    │ functional-936345 │ jenkins │ v1.37.0 │ 19 Dec 25 02:50 UTC │                     │
	│ ssh     │ functional-936345 ssh echo hello                                                                                                                             │ functional-936345 │ jenkins │ v1.37.0 │ 19 Dec 25 02:50 UTC │ 19 Dec 25 02:50 UTC │
	│ ssh     │ functional-936345 ssh cat /etc/hostname                                                                                                                      │ functional-936345 │ jenkins │ v1.37.0 │ 19 Dec 25 02:50 UTC │ 19 Dec 25 02:50 UTC │
	│ ssh     │ functional-936345 ssh sudo cat /etc/test/nested/copy/8937/hosts                                                                                              │ functional-936345 │ jenkins │ v1.37.0 │ 19 Dec 25 02:50 UTC │ 19 Dec 25 02:50 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 02:49:20
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 02:49:20.613050   19647 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:49:20.613357   19647 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:49:20.613374   19647 out.go:374] Setting ErrFile to fd 2...
	I1219 02:49:20.613381   19647 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:49:20.613718   19647 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5010/.minikube/bin
	I1219 02:49:20.614304   19647 out.go:368] Setting JSON to false
	I1219 02:49:20.615290   19647 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1905,"bootTime":1766110656,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 02:49:20.615343   19647 start.go:143] virtualization: kvm guest
	I1219 02:49:20.617630   19647 out.go:179] * [functional-936345] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 02:49:20.618815   19647 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 02:49:20.618812   19647 notify.go:221] Checking for updates...
	I1219 02:49:20.621213   19647 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 02:49:20.622209   19647 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-5010/kubeconfig
	I1219 02:49:20.623046   19647 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5010/.minikube
	I1219 02:49:20.624030   19647 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 02:49:20.624937   19647 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 02:49:20.626555   19647 config.go:182] Loaded profile config "functional-936345": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1219 02:49:20.627264   19647 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 02:49:20.657353   19647 out.go:179] * Using the kvm2 driver based on existing profile
	I1219 02:49:20.658439   19647 start.go:309] selected driver: kvm2
	I1219 02:49:20.658450   19647 start.go:928] validating driver "kvm2" against &{Name:functional-936345 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-rc.1 ClusterName:functional-936345 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 02:49:20.658547   19647 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 02:49:20.659392   19647 cni.go:84] Creating CNI manager for ""
	I1219 02:49:20.659478   19647 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1219 02:49:20.659525   19647 start.go:353] cluster config:
	{Name:functional-936345 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-936345 Namespace:default APIS
erverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSiz
e:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 02:49:20.660700   19647 out.go:179] * dry-run validation complete!
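The CRI-O section below consists almost entirely of debug-level Request/Response pairs for three CRI RPCs polled in a tight loop: /runtime.v1.RuntimeService/Version, /runtime.v1.ImageService/ImageFsInfo, and /runtime.v1.RuntimeService/ListContainers. A minimal sketch of issuing the same three RPCs directly against the CRI-O socket, assuming the default socket path /var/run/crio/crio.sock and the k8s.io/cri-api v1 client (the sketch is illustrative and not part of this report's tooling):

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Default CRI-O socket path; adjust if the runtime is configured differently.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		rt := runtimeapi.NewRuntimeServiceClient(conn)
		img := runtimeapi.NewImageServiceClient(conn)

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Same RPCs that appear as Request/Response pairs in the CRI-O debug log.
		ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("runtime:", ver.RuntimeName, ver.RuntimeVersion)

		fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
		if err != nil {
			log.Fatal(err)
		}
		for _, f := range fs.ImageFilesystems {
			fmt.Println("image fs:", f.FsId.Mountpoint, "used bytes:", f.UsedBytes.Value)
		}

		// An empty filter returns the full container list, matching the
		// "No filters were applied" lines in the log.
		cs, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("containers:", len(cs.Containers))
	}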
	
	
	==> CRI-O <==
	Dec 19 02:54:21 functional-936345 crio[5228]: time="2025-12-19 02:54:21.359977362Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766112861359955488,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:174538,},InodesUsed:&UInt64Value{Value:87,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=18848ffc-024f-46cb-9674-b92b6563a0c7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 02:54:21 functional-936345 crio[5228]: time="2025-12-19 02:54:21.360775369Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a63120c2-206e-4940-ac98-5e01a4e02c9b name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 02:54:21 functional-936345 crio[5228]: time="2025-12-19 02:54:21.360893660Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a63120c2-206e-4940-ac98-5e01a4e02c9b name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 02:54:21 functional-936345 crio[5228]: time="2025-12-19 02:54:21.361537841Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e1a5f3eb5f526298bd503d941552876969e9e0556abcf372f10b81ea35a6035,PodSandboxId:12bacd69cf5382b27b9a90d143232090d7833ad686639b002af88757f19da5f4,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1766112626299280408,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cbe8a0dd-5db0-4c8a-a076-c2b0c28c1f82,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91e758f1aacca078e5d5e4ade2daaf122c267a8da1977c176e140be8c0f6b5b8,PodSandboxId:e918007929d35b5cb8e3dc0752427803b8496d458837ae4795f9a999b104b7c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1766112538553240558,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-vbbbl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6293c621-6771-4f49-82da-b9855c06b18c,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f41e37e14f858d2395248ad7cc9a1110682e9292d64ed829cdfdb3235af49eb4,PodSandboxId:a92b49d24460aa320c25e155d6caf6b08b0235c1544469c7b8221a11f487c3b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_RUNNING,CreatedAt:1766112538232052310,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lfp8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5822977e-9a2f-4ca6-8cb6-26f161c23791,},Annotations:map[string]string{io.kubernetes.container.hash: d6e0e1a9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b9d4f0960c05f3f429e70c4f2fe9f0011729c50b33da6bd7b3b81b3de517ab3,PodSandboxId:70ef0261fbcedcbc5ae3230cf923a4dfffb20600a7e4151d688f1fb654ccf238,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766112538192380480,Labels:map[string]string{io.kubernetes.
container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd92a8c6-e659-4184-bcbd-43da477075c7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a6831dc2bcef8d4b1a94250aed36dd9fb44a05ea80016e3768e48123a8e1f6c,PodSandboxId:35cb157095476b02b0abeaf08d488668f96c9e0d4728acd3994b964c3700b468,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_RUNNING,CreatedAt:1766112534452951956,Labels:map[string]string{io.kubernetes.container.name: k
ube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-936345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2b4be683f1b2146cda99455d27828d0,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f7a42f7becb9ee6e394e4711ce0d939c670b9b86111d9eacaf91742d4b61876,PodSandboxId:d02d27fb02ddf4040b24d90914f56b9a7e2f13861d573d1b8756bef024538549,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,
State:CONTAINER_RUNNING,CreatedAt:1766112534422616995,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-936345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 711dafb450914341c18eb7924278c787,},Annotations:map[string]string{io.kubernetes.container.hash: 3a762ed7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4ae4a05d12afd9d8d285abf18d51893a511feb65943a48d1182033fbca6b790,PodSandboxId:e27f37b5433ef140fc0770f3aa8fad8c203db6e4c12219c4ecf0d9f7e70d6d3a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedI
mage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_RUNNING,CreatedAt:1766112534406923517,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-936345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c78855ae3727af10a88609c41e957243,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae201f753e2fd25d72bcc3e9cb3882717e0a242162a84fba4346388878190fb2,PodSandboxId:f9bc0a0f70474fa6cc1046f5edb3fb37d910306f5e376c522bec7f7178b89806,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5032a56602e1b9
bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_RUNNING,CreatedAt:1766112534388772204,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-936345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e24264402a1697b7f5edbdf8e719b1ae,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a7791700eee60709360756b6457c1f39022bd70f852043f8fc410912b7eac51,PodSandboxId:96419d7b2a3c490726e8415d3406d3339f
f543893b9fcf82f96df0c362123b09,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1766112496013321509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-vbbbl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6293c621-6771-4f49-82da-b9855c06b18c,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\
"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf6d164603577eba196a03571377b050885615eb2c1f5bc8cd2b3648f7365509,PodSandboxId:9589988aeaee6b32374ef45e84566e546064d358ea05730fe94ccb2632816218,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_EXITED,CreatedAt:1766112495721931029,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lfp8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5822977e-9a2f-4ca6-8cb6-26f161c23791,},Annotations:map[string]string{io.kubernetes.container.hash: d6e0e1a9,io.kubernetes.container.restar
tCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55c562b08e9b8503c78d017881ca66b665a54239ea96beb94ed77195ee8e7043,PodSandboxId:ace9301814f0ffaaec2e3bacca3662ed1afc226f3f5ce4e68dcb977e524983ae,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766112495709625548,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd92a8c6-e659-4184-bcbd-43da477075c7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cbc3f93240053c62c6dd367d02faccdf3daa42629940378a9c34a13fcb58be8,PodSandboxId:c6fec3d2091daa0779891e3a982fd27bf9b549e96b7343ef851e6090bd001aaa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_EXITED,CreatedAt:1766112491916767855,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-936345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2b4be683f1b2146cda99455d27828d0,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",
\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c30141c2d08a9d0bc12975b94953d7e149695bd82c65e81992aea6fcea340cc,PodSandboxId:3032bc4eef53c29fcaa42cae1398d811952a3c3289a6e573a2d367833dae7cd2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_EXITED,CreatedAt:1766112491863558805,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-936345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e24264402a1697b7f5edbdf8e
719b1ae,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baa578ff64f7266f8aec4a6a718785e4c57c078f37fb762460df66a5ae091dc9,PodSandboxId:d38d8b5ae41c4328ced9df76847a7d97a43a0851e2cc16e31fd4bc1c4e24c235,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_EXITED,CreatedAt:1766112491815245134,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-936345,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: c78855ae3727af10a88609c41e957243,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a63120c2-206e-4940-ac98-5e01a4e02c9b name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 02:54:21 functional-936345 crio[5228]: time="2025-12-19 02:54:21.405596226Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ae09e2b1-7471-463f-be8a-48f103ee94e3 name=/runtime.v1.RuntimeService/Version
	Dec 19 02:54:21 functional-936345 crio[5228]: time="2025-12-19 02:54:21.405957682Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ae09e2b1-7471-463f-be8a-48f103ee94e3 name=/runtime.v1.RuntimeService/Version
	Dec 19 02:54:21 functional-936345 crio[5228]: time="2025-12-19 02:54:21.407288046Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b53c3856-ae7c-4c09-ace5-b5bfca2065ae name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 02:54:21 functional-936345 crio[5228]: time="2025-12-19 02:54:21.407855505Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766112861407832012,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:174538,},InodesUsed:&UInt64Value{Value:87,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b53c3856-ae7c-4c09-ace5-b5bfca2065ae name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 02:54:21 functional-936345 crio[5228]: time="2025-12-19 02:54:21.408590605Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3fdccbfe-314a-4cb6-9985-da80bed2503d name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 02:54:21 functional-936345 crio[5228]: time="2025-12-19 02:54:21.408659086Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3fdccbfe-314a-4cb6-9985-da80bed2503d name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 02:54:21 functional-936345 crio[5228]: time="2025-12-19 02:54:21.408944780Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e1a5f3eb5f526298bd503d941552876969e9e0556abcf372f10b81ea35a6035,PodSandboxId:12bacd69cf5382b27b9a90d143232090d7833ad686639b002af88757f19da5f4,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1766112626299280408,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cbe8a0dd-5db0-4c8a-a076-c2b0c28c1f82,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91e758f1aacca078e5d5e4ade2daaf122c267a8da1977c176e140be8c0f6b5b8,PodSandboxId:e918007929d35b5cb8e3dc0752427803b8496d458837ae4795f9a999b104b7c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1766112538553240558,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-vbbbl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6293c621-6771-4f49-82da-b9855c06b18c,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f41e37e14f858d2395248ad7cc9a1110682e9292d64ed829cdfdb3235af49eb4,PodSandboxId:a92b49d24460aa320c25e155d6caf6b08b0235c1544469c7b8221a11f487c3b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_RUNNING,CreatedAt:1766112538232052310,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lfp8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5822977e-9a2f-4ca6-8cb6-26f161c23791,},Annotations:map[string]string{io.kubernetes.container.hash: d6e0e1a9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b9d4f0960c05f3f429e70c4f2fe9f0011729c50b33da6bd7b3b81b3de517ab3,PodSandboxId:70ef0261fbcedcbc5ae3230cf923a4dfffb20600a7e4151d688f1fb654ccf238,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766112538192380480,Labels:map[string]string{io.kubernetes.
container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd92a8c6-e659-4184-bcbd-43da477075c7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a6831dc2bcef8d4b1a94250aed36dd9fb44a05ea80016e3768e48123a8e1f6c,PodSandboxId:35cb157095476b02b0abeaf08d488668f96c9e0d4728acd3994b964c3700b468,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_RUNNING,CreatedAt:1766112534452951956,Labels:map[string]string{io.kubernetes.container.name: k
ube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-936345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2b4be683f1b2146cda99455d27828d0,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f7a42f7becb9ee6e394e4711ce0d939c670b9b86111d9eacaf91742d4b61876,PodSandboxId:d02d27fb02ddf4040b24d90914f56b9a7e2f13861d573d1b8756bef024538549,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,
State:CONTAINER_RUNNING,CreatedAt:1766112534422616995,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-936345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 711dafb450914341c18eb7924278c787,},Annotations:map[string]string{io.kubernetes.container.hash: 3a762ed7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4ae4a05d12afd9d8d285abf18d51893a511feb65943a48d1182033fbca6b790,PodSandboxId:e27f37b5433ef140fc0770f3aa8fad8c203db6e4c12219c4ecf0d9f7e70d6d3a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedI
mage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_RUNNING,CreatedAt:1766112534406923517,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-936345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c78855ae3727af10a88609c41e957243,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae201f753e2fd25d72bcc3e9cb3882717e0a242162a84fba4346388878190fb2,PodSandboxId:f9bc0a0f70474fa6cc1046f5edb3fb37d910306f5e376c522bec7f7178b89806,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5032a56602e1b9
bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_RUNNING,CreatedAt:1766112534388772204,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-936345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e24264402a1697b7f5edbdf8e719b1ae,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a7791700eee60709360756b6457c1f39022bd70f852043f8fc410912b7eac51,PodSandboxId:96419d7b2a3c490726e8415d3406d3339f
f543893b9fcf82f96df0c362123b09,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1766112496013321509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-vbbbl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6293c621-6771-4f49-82da-b9855c06b18c,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\
"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf6d164603577eba196a03571377b050885615eb2c1f5bc8cd2b3648f7365509,PodSandboxId:9589988aeaee6b32374ef45e84566e546064d358ea05730fe94ccb2632816218,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_EXITED,CreatedAt:1766112495721931029,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lfp8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5822977e-9a2f-4ca6-8cb6-26f161c23791,},Annotations:map[string]string{io.kubernetes.container.hash: d6e0e1a9,io.kubernetes.container.restar
tCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55c562b08e9b8503c78d017881ca66b665a54239ea96beb94ed77195ee8e7043,PodSandboxId:ace9301814f0ffaaec2e3bacca3662ed1afc226f3f5ce4e68dcb977e524983ae,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766112495709625548,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd92a8c6-e659-4184-bcbd-43da477075c7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cbc3f93240053c62c6dd367d02faccdf3daa42629940378a9c34a13fcb58be8,PodSandboxId:c6fec3d2091daa0779891e3a982fd27bf9b549e96b7343ef851e6090bd001aaa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_EXITED,CreatedAt:1766112491916767855,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-936345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2b4be683f1b2146cda99455d27828d0,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",
\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c30141c2d08a9d0bc12975b94953d7e149695bd82c65e81992aea6fcea340cc,PodSandboxId:3032bc4eef53c29fcaa42cae1398d811952a3c3289a6e573a2d367833dae7cd2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_EXITED,CreatedAt:1766112491863558805,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-936345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e24264402a1697b7f5edbdf8e
719b1ae,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baa578ff64f7266f8aec4a6a718785e4c57c078f37fb762460df66a5ae091dc9,PodSandboxId:d38d8b5ae41c4328ced9df76847a7d97a43a0851e2cc16e31fd4bc1c4e24c235,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_EXITED,CreatedAt:1766112491815245134,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-936345,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: c78855ae3727af10a88609c41e957243,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3fdccbfe-314a-4cb6-9985-da80bed2503d name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 02:54:21 functional-936345 crio[5228]: time="2025-12-19 02:54:21.444914089Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3d320428-ce6c-4b35-81fb-86c51585f773 name=/runtime.v1.RuntimeService/Version
	Dec 19 02:54:21 functional-936345 crio[5228]: time="2025-12-19 02:54:21.445023198Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3d320428-ce6c-4b35-81fb-86c51585f773 name=/runtime.v1.RuntimeService/Version
	Dec 19 02:54:21 functional-936345 crio[5228]: time="2025-12-19 02:54:21.445914047Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f968ec25-bbfa-4459-8c87-0dda9be085f1 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 02:54:21 functional-936345 crio[5228]: time="2025-12-19 02:54:21.446460301Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766112861446441118,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:174538,},InodesUsed:&UInt64Value{Value:87,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f968ec25-bbfa-4459-8c87-0dda9be085f1 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 02:54:21 functional-936345 crio[5228]: time="2025-12-19 02:54:21.447514732Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=07619e23-e2d0-4d7d-8f70-a4b304bd47d1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 02:54:21 functional-936345 crio[5228]: time="2025-12-19 02:54:21.447594945Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=07619e23-e2d0-4d7d-8f70-a4b304bd47d1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 02:54:21 functional-936345 crio[5228]: time="2025-12-19 02:54:21.447890323Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e1a5f3eb5f526298bd503d941552876969e9e0556abcf372f10b81ea35a6035,PodSandboxId:12bacd69cf5382b27b9a90d143232090d7833ad686639b002af88757f19da5f4,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1766112626299280408,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cbe8a0dd-5db0-4c8a-a076-c2b0c28c1f82,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91e758f1aacca078e5d5e4ade2daaf122c267a8da1977c176e140be8c0f6b5b8,PodSandboxId:e918007929d35b5cb8e3dc0752427803b8496d458837ae4795f9a999b104b7c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1766112538553240558,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-vbbbl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6293c621-6771-4f49-82da-b9855c06b18c,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f41e37e14f858d2395248ad7cc9a1110682e9292d64ed829cdfdb3235af49eb4,PodSandboxId:a92b49d24460aa320c25e155d6caf6b08b0235c1544469c7b8221a11f487c3b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_RUNNING,CreatedAt:1766112538232052310,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lfp8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5822977e-9a2f-4ca6-8cb6-26f161c23791,},Annotations:map[string]string{io.kubernetes.container.hash: d6e0e1a9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b9d4f0960c05f3f429e70c4f2fe9f0011729c50b33da6bd7b3b81b3de517ab3,PodSandboxId:70ef0261fbcedcbc5ae3230cf923a4dfffb20600a7e4151d688f1fb654ccf238,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766112538192380480,Labels:map[string]string{io.kubernetes.
container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd92a8c6-e659-4184-bcbd-43da477075c7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a6831dc2bcef8d4b1a94250aed36dd9fb44a05ea80016e3768e48123a8e1f6c,PodSandboxId:35cb157095476b02b0abeaf08d488668f96c9e0d4728acd3994b964c3700b468,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_RUNNING,CreatedAt:1766112534452951956,Labels:map[string]string{io.kubernetes.container.name: k
ube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-936345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2b4be683f1b2146cda99455d27828d0,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f7a42f7becb9ee6e394e4711ce0d939c670b9b86111d9eacaf91742d4b61876,PodSandboxId:d02d27fb02ddf4040b24d90914f56b9a7e2f13861d573d1b8756bef024538549,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,
State:CONTAINER_RUNNING,CreatedAt:1766112534422616995,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-936345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 711dafb450914341c18eb7924278c787,},Annotations:map[string]string{io.kubernetes.container.hash: 3a762ed7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4ae4a05d12afd9d8d285abf18d51893a511feb65943a48d1182033fbca6b790,PodSandboxId:e27f37b5433ef140fc0770f3aa8fad8c203db6e4c12219c4ecf0d9f7e70d6d3a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedI
mage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_RUNNING,CreatedAt:1766112534406923517,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-936345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c78855ae3727af10a88609c41e957243,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae201f753e2fd25d72bcc3e9cb3882717e0a242162a84fba4346388878190fb2,PodSandboxId:f9bc0a0f70474fa6cc1046f5edb3fb37d910306f5e376c522bec7f7178b89806,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5032a56602e1b9
bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_RUNNING,CreatedAt:1766112534388772204,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-936345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e24264402a1697b7f5edbdf8e719b1ae,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a7791700eee60709360756b6457c1f39022bd70f852043f8fc410912b7eac51,PodSandboxId:96419d7b2a3c490726e8415d3406d3339f
f543893b9fcf82f96df0c362123b09,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1766112496013321509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-vbbbl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6293c621-6771-4f49-82da-b9855c06b18c,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\
"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf6d164603577eba196a03571377b050885615eb2c1f5bc8cd2b3648f7365509,PodSandboxId:9589988aeaee6b32374ef45e84566e546064d358ea05730fe94ccb2632816218,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_EXITED,CreatedAt:1766112495721931029,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lfp8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5822977e-9a2f-4ca6-8cb6-26f161c23791,},Annotations:map[string]string{io.kubernetes.container.hash: d6e0e1a9,io.kubernetes.container.restar
tCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55c562b08e9b8503c78d017881ca66b665a54239ea96beb94ed77195ee8e7043,PodSandboxId:ace9301814f0ffaaec2e3bacca3662ed1afc226f3f5ce4e68dcb977e524983ae,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766112495709625548,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd92a8c6-e659-4184-bcbd-43da477075c7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cbc3f93240053c62c6dd367d02faccdf3daa42629940378a9c34a13fcb58be8,PodSandboxId:c6fec3d2091daa0779891e3a982fd27bf9b549e96b7343ef851e6090bd001aaa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_EXITED,CreatedAt:1766112491916767855,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-936345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2b4be683f1b2146cda99455d27828d0,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",
\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c30141c2d08a9d0bc12975b94953d7e149695bd82c65e81992aea6fcea340cc,PodSandboxId:3032bc4eef53c29fcaa42cae1398d811952a3c3289a6e573a2d367833dae7cd2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_EXITED,CreatedAt:1766112491863558805,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-936345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e24264402a1697b7f5edbdf8e
719b1ae,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baa578ff64f7266f8aec4a6a718785e4c57c078f37fb762460df66a5ae091dc9,PodSandboxId:d38d8b5ae41c4328ced9df76847a7d97a43a0851e2cc16e31fd4bc1c4e24c235,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_EXITED,CreatedAt:1766112491815245134,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-936345,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: c78855ae3727af10a88609c41e957243,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=07619e23-e2d0-4d7d-8f70-a4b304bd47d1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 02:54:21 functional-936345 crio[5228]: time="2025-12-19 02:54:21.476435168Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b4fa860a-c952-4429-8fef-21e306be96c1 name=/runtime.v1.RuntimeService/Version
	Dec 19 02:54:21 functional-936345 crio[5228]: time="2025-12-19 02:54:21.476562593Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b4fa860a-c952-4429-8fef-21e306be96c1 name=/runtime.v1.RuntimeService/Version
	Dec 19 02:54:21 functional-936345 crio[5228]: time="2025-12-19 02:54:21.478011631Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c6b5822c-296b-4ef0-a4a0-e75f17b25b60 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 02:54:21 functional-936345 crio[5228]: time="2025-12-19 02:54:21.478684884Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766112861478664234,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:174538,},InodesUsed:&UInt64Value{Value:87,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c6b5822c-296b-4ef0-a4a0-e75f17b25b60 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 02:54:21 functional-936345 crio[5228]: time="2025-12-19 02:54:21.479544114Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cf26eafa-7abe-4b59-bf30-9a9b3e123455 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 02:54:21 functional-936345 crio[5228]: time="2025-12-19 02:54:21.479693452Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cf26eafa-7abe-4b59-bf30-9a9b3e123455 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 02:54:21 functional-936345 crio[5228]: time="2025-12-19 02:54:21.480277467Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e1a5f3eb5f526298bd503d941552876969e9e0556abcf372f10b81ea35a6035,PodSandboxId:12bacd69cf5382b27b9a90d143232090d7833ad686639b002af88757f19da5f4,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1766112626299280408,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cbe8a0dd-5db0-4c8a-a076-c2b0c28c1f82,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91e758f1aacca078e5d5e4ade2daaf122c267a8da1977c176e140be8c0f6b5b8,PodSandboxId:e918007929d35b5cb8e3dc0752427803b8496d458837ae4795f9a999b104b7c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1766112538553240558,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-vbbbl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6293c621-6771-4f49-82da-b9855c06b18c,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f41e37e14f858d2395248ad7cc9a1110682e9292d64ed829cdfdb3235af49eb4,PodSandboxId:a92b49d24460aa320c25e155d6caf6b08b0235c1544469c7b8221a11f487c3b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_RUNNING,CreatedAt:1766112538232052310,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lfp8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5822977e-9a2f-4ca6-8cb6-26f161c23791,},Annotations:map[string]string{io.kubernetes.container.hash: d6e0e1a9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b9d4f0960c05f3f429e70c4f2fe9f0011729c50b33da6bd7b3b81b3de517ab3,PodSandboxId:70ef0261fbcedcbc5ae3230cf923a4dfffb20600a7e4151d688f1fb654ccf238,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766112538192380480,Labels:map[string]string{io.kubernetes.
container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd92a8c6-e659-4184-bcbd-43da477075c7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a6831dc2bcef8d4b1a94250aed36dd9fb44a05ea80016e3768e48123a8e1f6c,PodSandboxId:35cb157095476b02b0abeaf08d488668f96c9e0d4728acd3994b964c3700b468,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_RUNNING,CreatedAt:1766112534452951956,Labels:map[string]string{io.kubernetes.container.name: k
ube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-936345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2b4be683f1b2146cda99455d27828d0,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f7a42f7becb9ee6e394e4711ce0d939c670b9b86111d9eacaf91742d4b61876,PodSandboxId:d02d27fb02ddf4040b24d90914f56b9a7e2f13861d573d1b8756bef024538549,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,
State:CONTAINER_RUNNING,CreatedAt:1766112534422616995,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-936345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 711dafb450914341c18eb7924278c787,},Annotations:map[string]string{io.kubernetes.container.hash: 3a762ed7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4ae4a05d12afd9d8d285abf18d51893a511feb65943a48d1182033fbca6b790,PodSandboxId:e27f37b5433ef140fc0770f3aa8fad8c203db6e4c12219c4ecf0d9f7e70d6d3a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedI
mage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_RUNNING,CreatedAt:1766112534406923517,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-936345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c78855ae3727af10a88609c41e957243,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae201f753e2fd25d72bcc3e9cb3882717e0a242162a84fba4346388878190fb2,PodSandboxId:f9bc0a0f70474fa6cc1046f5edb3fb37d910306f5e376c522bec7f7178b89806,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5032a56602e1b9
bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_RUNNING,CreatedAt:1766112534388772204,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-936345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e24264402a1697b7f5edbdf8e719b1ae,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a7791700eee60709360756b6457c1f39022bd70f852043f8fc410912b7eac51,PodSandboxId:96419d7b2a3c490726e8415d3406d3339f
f543893b9fcf82f96df0c362123b09,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1766112496013321509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-vbbbl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6293c621-6771-4f49-82da-b9855c06b18c,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\
"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf6d164603577eba196a03571377b050885615eb2c1f5bc8cd2b3648f7365509,PodSandboxId:9589988aeaee6b32374ef45e84566e546064d358ea05730fe94ccb2632816218,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_EXITED,CreatedAt:1766112495721931029,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lfp8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5822977e-9a2f-4ca6-8cb6-26f161c23791,},Annotations:map[string]string{io.kubernetes.container.hash: d6e0e1a9,io.kubernetes.container.restar
tCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55c562b08e9b8503c78d017881ca66b665a54239ea96beb94ed77195ee8e7043,PodSandboxId:ace9301814f0ffaaec2e3bacca3662ed1afc226f3f5ce4e68dcb977e524983ae,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766112495709625548,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd92a8c6-e659-4184-bcbd-43da477075c7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cbc3f93240053c62c6dd367d02faccdf3daa42629940378a9c34a13fcb58be8,PodSandboxId:c6fec3d2091daa0779891e3a982fd27bf9b549e96b7343ef851e6090bd001aaa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_EXITED,CreatedAt:1766112491916767855,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-936345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2b4be683f1b2146cda99455d27828d0,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",
\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c30141c2d08a9d0bc12975b94953d7e149695bd82c65e81992aea6fcea340cc,PodSandboxId:3032bc4eef53c29fcaa42cae1398d811952a3c3289a6e573a2d367833dae7cd2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_EXITED,CreatedAt:1766112491863558805,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-936345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e24264402a1697b7f5edbdf8e
719b1ae,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baa578ff64f7266f8aec4a6a718785e4c57c078f37fb762460df66a5ae091dc9,PodSandboxId:d38d8b5ae41c4328ced9df76847a7d97a43a0851e2cc16e31fd4bc1c4e24c235,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_EXITED,CreatedAt:1766112491815245134,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-936345,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: c78855ae3727af10a88609c41e957243,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cf26eafa-7abe-4b59-bf30-9a9b3e123455 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	6e1a5f3eb5f52       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   3 minutes ago       Exited              mount-munger              0                   12bacd69cf538       busybox-mount                               default
	91e758f1aacca       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                      5 minutes ago       Running             coredns                   2                   e918007929d35       coredns-7d764666f9-vbbbl                    kube-system
	f41e37e14f858       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a                                      5 minutes ago       Running             kube-proxy                2                   a92b49d24460a       kube-proxy-lfp8r                            kube-system
	9b9d4f0960c05       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       2                   70ef0261fbced       storage-provisioner                         kube-system
	1a6831dc2bcef       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc                                      5 minutes ago       Running             kube-scheduler            2                   35cb157095476       kube-scheduler-functional-936345            kube-system
	9f7a42f7becb9       58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce                                      5 minutes ago       Running             kube-apiserver            0                   d02d27fb02ddf       kube-apiserver-functional-936345            kube-system
	d4ae4a05d12af       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                      5 minutes ago       Running             etcd                      2                   e27f37b5433ef       etcd-functional-936345                      kube-system
	ae201f753e2fd       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614                                      5 minutes ago       Running             kube-controller-manager   2                   f9bc0a0f70474       kube-controller-manager-functional-936345   kube-system
	3a7791700eee6       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                      6 minutes ago       Exited              coredns                   1                   96419d7b2a3c4       coredns-7d764666f9-vbbbl                    kube-system
	cf6d164603577       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a                                      6 minutes ago       Exited              kube-proxy                1                   9589988aeaee6       kube-proxy-lfp8r                            kube-system
	55c562b08e9b8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Exited              storage-provisioner       1                   ace9301814f0f       storage-provisioner                         kube-system
	5cbc3f9324005       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc                                      6 minutes ago       Exited              kube-scheduler            1                   c6fec3d2091da       kube-scheduler-functional-936345            kube-system
	0c30141c2d08a       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614                                      6 minutes ago       Exited              kube-controller-manager   1                   3032bc4eef53c       kube-controller-manager-functional-936345   kube-system
	baa578ff64f72       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                      6 minutes ago       Exited              etcd                      1                   d38d8b5ae41c4       etcd-functional-936345                      kube-system
	
	
	==> coredns [3a7791700eee60709360756b6457c1f39022bd70f852043f8fc410912b7eac51] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:45323 - 19793 "HINFO IN 4612485286517945921.2260184995115770894. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.030459974s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [91e758f1aacca078e5d5e4ade2daaf122c267a8da1977c176e140be8c0f6b5b8] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:41838 - 52153 "HINFO IN 4280416885053973904.1603704149913353446. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.024671331s
	
	
	==> describe nodes <==
	Name:               functional-936345
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-936345
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=functional-936345
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T02_47_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 02:47:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-936345
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 02:54:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 02:50:59 +0000   Fri, 19 Dec 2025 02:47:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 02:50:59 +0000   Fri, 19 Dec 2025 02:47:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 02:50:59 +0000   Fri, 19 Dec 2025 02:47:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 02:50:59 +0000   Fri, 19 Dec 2025 02:47:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.80
	  Hostname:    functional-936345
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 a8b3017e61d64166941fb065e0138897
	  System UUID:                a8b3017e-61d6-4166-941f-b065e0138897
	  Boot ID:                    abf60ed3-98d5-481b-b193-2b3182ae8fc7
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-5758569b79-sxs2v                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	  default                     hello-node-connect-9f67c86d4-wg5l4                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	  default                     mysql-7d7b65bc95-c6dqh                                   600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    3m42s
	  kube-system                 coredns-7d764666f9-vbbbl                                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m5s
	  kube-system                 etcd-functional-936345                                   100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m11s
	  kube-system                 kube-apiserver-functional-936345                         250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m24s
	  kube-system                 kube-controller-manager-functional-936345                200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m11s
	  kube-system                 kube-proxy-lfp8r                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m5s
	  kube-system                 kube-scheduler-functional-936345                         100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m11s
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m4s
	  kubernetes-dashboard        kubernetes-dashboard-api-6f6bc9c789-nhrc9                100m (5%)     250m (12%)  200Mi (5%)       400Mi (10%)    4m57s
	  kubernetes-dashboard        kubernetes-dashboard-auth-7d44b44fcf-wk8mz               100m (5%)     250m (12%)  200Mi (5%)       400Mi (10%)    4m57s
	  kubernetes-dashboard        kubernetes-dashboard-kong-78b7499b45-tkd7b               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  kubernetes-dashboard        kubernetes-dashboard-metrics-scraper-594bbfb84b-sb6xj    100m (5%)     250m (12%)  200Mi (5%)       400Mi (10%)    4m57s
	  kubernetes-dashboard        kubernetes-dashboard-web-7f7574785f-gldgt                100m (5%)     250m (12%)  200Mi (5%)       400Mi (10%)    4m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests      Limits
	  --------           --------      ------
	  cpu                1750m (87%)   1700m (85%)
	  memory             1482Mi (37%)  2470Mi (63%)
	  ephemeral-storage  0 (0%)        0 (0%)
	  hugepages-2Mi      0 (0%)        0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  7m6s   node-controller  Node functional-936345 event: Registered Node functional-936345 in Controller
	  Normal  RegisteredNode  6m4s   node-controller  Node functional-936345 event: Registered Node functional-936345 in Controller
	  Normal  RegisteredNode  5m21s  node-controller  Node functional-936345 event: Registered Node functional-936345 in Controller
	
	
	==> dmesg <==
	[Dec19 02:46] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001508] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.002152] (rpcbind)[120]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.177929] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.089908] kauditd_printk_skb: 1 callbacks suppressed
	[Dec19 02:47] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.120827] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.207935] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.030152] kauditd_printk_skb: 236 callbacks suppressed
	[ +35.965741] kauditd_printk_skb: 45 callbacks suppressed
	[Dec19 02:48] kauditd_printk_skb: 242 callbacks suppressed
	[  +8.572675] kauditd_printk_skb: 131 callbacks suppressed
	[  +0.105882] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.902513] kauditd_printk_skb: 78 callbacks suppressed
	[  +4.545680] kauditd_printk_skb: 170 callbacks suppressed
	[Dec19 02:49] kauditd_printk_skb: 131 callbacks suppressed
	[  +0.662586] kauditd_printk_skb: 169 callbacks suppressed
	[  +3.856516] kauditd_printk_skb: 32 callbacks suppressed
	[ +26.918937] kauditd_printk_skb: 182 callbacks suppressed
	[Dec19 02:50] kauditd_printk_skb: 29 callbacks suppressed
	[Dec19 02:51] kauditd_printk_skb: 38 callbacks suppressed
	
	
	==> etcd [baa578ff64f7266f8aec4a6a718785e4c57c078f37fb762460df66a5ae091dc9] <==
	{"level":"info","ts":"2025-12-19T02:48:13.154917Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-19T02:48:13.155032Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-19T02:48:13.155088Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-19T02:48:13.156237Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-19T02:48:13.157430Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-19T02:48:13.158447Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-19T02:48:13.158630Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.80:2379"}
	{"level":"info","ts":"2025-12-19T02:48:40.232114Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-19T02:48:40.232178Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-936345","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.80:2380"],"advertise-client-urls":["https://192.168.39.80:2379"]}
	{"level":"error","ts":"2025-12-19T02:48:40.232249Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-19T02:48:40.319148Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-19T02:48:40.319221Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-19T02:48:40.319251Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"d33e7f1dba1e46ae","current-leader-member-id":"d33e7f1dba1e46ae"}
	{"level":"info","ts":"2025-12-19T02:48:40.319322Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-19T02:48:40.319353Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-19T02:48:40.319346Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-19T02:48:40.319584Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-19T02:48:40.319613Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-19T02:48:40.319428Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.80:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-19T02:48:40.319625Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.80:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-19T02:48:40.319631Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.80:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-19T02:48:40.322729Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.80:2380"}
	{"level":"error","ts":"2025-12-19T02:48:40.322871Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.80:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-19T02:48:40.322895Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.80:2380"}
	{"level":"info","ts":"2025-12-19T02:48:40.322995Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-936345","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.80:2380"],"advertise-client-urls":["https://192.168.39.80:2379"]}
	
	
	==> etcd [d4ae4a05d12afd9d8d285abf18d51893a511feb65943a48d1182033fbca6b790] <==
	{"level":"info","ts":"2025-12-19T02:48:54.841194Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-19T02:48:54.841240Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-19T02:48:54.834939Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"d33e7f1dba1e46ae switched to configuration voters=(15221743556212180654)"}
	{"level":"info","ts":"2025-12-19T02:48:54.841361Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"e6a6fd39da75dc67","local-member-id":"d33e7f1dba1e46ae","added-peer-id":"d33e7f1dba1e46ae","added-peer-peer-urls":["https://192.168.39.80:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-19T02:48:54.834985Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.39.80:2380"}
	{"level":"info","ts":"2025-12-19T02:48:54.841542Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.39.80:2380"}
	{"level":"info","ts":"2025-12-19T02:48:54.841469Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"e6a6fd39da75dc67","local-member-id":"d33e7f1dba1e46ae","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-19T02:48:55.689046Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"d33e7f1dba1e46ae is starting a new election at term 3"}
	{"level":"info","ts":"2025-12-19T02:48:55.689085Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"d33e7f1dba1e46ae became pre-candidate at term 3"}
	{"level":"info","ts":"2025-12-19T02:48:55.689119Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"d33e7f1dba1e46ae received MsgPreVoteResp from d33e7f1dba1e46ae at term 3"}
	{"level":"info","ts":"2025-12-19T02:48:55.689129Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"d33e7f1dba1e46ae has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-19T02:48:55.689142Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"d33e7f1dba1e46ae became candidate at term 4"}
	{"level":"info","ts":"2025-12-19T02:48:55.693750Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"d33e7f1dba1e46ae received MsgVoteResp from d33e7f1dba1e46ae at term 4"}
	{"level":"info","ts":"2025-12-19T02:48:55.693775Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"d33e7f1dba1e46ae has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-19T02:48:55.693827Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"d33e7f1dba1e46ae became leader at term 4"}
	{"level":"info","ts":"2025-12-19T02:48:55.693836Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: d33e7f1dba1e46ae elected leader d33e7f1dba1e46ae at term 4"}
	{"level":"info","ts":"2025-12-19T02:48:55.695266Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"d33e7f1dba1e46ae","local-member-attributes":"{Name:functional-936345 ClientURLs:[https://192.168.39.80:2379]}","cluster-id":"e6a6fd39da75dc67","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-19T02:48:55.695282Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-19T02:48:55.695423Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-19T02:48:55.695568Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-19T02:48:55.695581Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-19T02:48:55.696535Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-19T02:48:55.696663Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-19T02:48:55.700154Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-19T02:48:55.700509Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.80:2379"}
	
	
	==> kernel <==
	 02:54:21 up 7 min,  0 users,  load average: 0.04, 0.20, 0.12
	Linux functional-936345 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Dec 17 12:49:57 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [9f7a42f7becb9ee6e394e4711ce0d939c670b9b86111d9eacaf91742d4b61876] <==
	I1219 02:49:22.403015       1 handler.go:304] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
	I1219 02:49:22.428511       1 handler.go:304] Adding GroupVersion configuration.konghq.com v1beta1 to ResourceManager
	I1219 02:49:22.447047       1 handler.go:304] Adding GroupVersion configuration.konghq.com v1beta1 to ResourceManager
	I1219 02:49:22.461676       1 handler.go:304] Adding GroupVersion configuration.konghq.com v1 to ResourceManager
	I1219 02:49:22.471372       1 handler.go:304] Adding GroupVersion configuration.konghq.com v1 to ResourceManager
	I1219 02:49:22.481876       1 handler.go:304] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
	I1219 02:49:24.784669       1 controller.go:667] quota admission added evaluator for: namespaces
	I1219 02:49:24.874735       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-kong-proxy" clusterIPs={"IPv4":"10.99.194.156"}
	I1219 02:49:24.883995       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-api" clusterIPs={"IPv4":"10.105.202.148"}
	I1219 02:49:24.894415       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.171.21"}
	I1219 02:49:24.905243       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-auth" clusterIPs={"IPv4":"10.97.127.37"}
	I1219 02:49:24.915585       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-web" clusterIPs={"IPv4":"10.106.84.86"}
	W1219 02:49:30.118472       1 logging.go:55] [core] [Channel #262 SubChannel #263]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:49:30.137352       1 logging.go:55] [core] [Channel #266 SubChannel #267]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:49:30.148888       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1219 02:49:30.164574       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:49:30.186134       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:49:30.222331       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:49:30.249253       1 logging.go:55] [core] [Channel #286 SubChannel #287]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:49:30.265975       1 logging.go:55] [core] [Channel #290 SubChannel #291]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:49:30.275854       1 logging.go:55] [core] [Channel #294 SubChannel #295]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:49:30.285561       1 logging.go:55] [core] [Channel #298 SubChannel #299]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1219 02:49:30.297257       1 logging.go:55] [core] [Channel #302 SubChannel #303]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:49:30.312093       1 logging.go:55] [core] [Channel #306 SubChannel #307]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1219 02:50:39.580243       1 alloc.go:329] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.107.214.188"}
	
	
	==> kube-controller-manager [0c30141c2d08a9d0bc12975b94953d7e149695bd82c65e81992aea6fcea340cc] <==
	I1219 02:48:17.540973       1 shared_informer.go:377] "Caches are synced"
	I1219 02:48:17.541104       1 shared_informer.go:377] "Caches are synced"
	I1219 02:48:17.541169       1 shared_informer.go:377] "Caches are synced"
	I1219 02:48:17.541134       1 shared_informer.go:377] "Caches are synced"
	I1219 02:48:17.541644       1 shared_informer.go:377] "Caches are synced"
	I1219 02:48:17.541838       1 shared_informer.go:377] "Caches are synced"
	I1219 02:48:17.541906       1 shared_informer.go:377] "Caches are synced"
	I1219 02:48:17.542026       1 shared_informer.go:377] "Caches are synced"
	I1219 02:48:17.542446       1 shared_informer.go:377] "Caches are synced"
	I1219 02:48:17.542474       1 shared_informer.go:377] "Caches are synced"
	I1219 02:48:17.542991       1 shared_informer.go:377] "Caches are synced"
	I1219 02:48:17.543035       1 shared_informer.go:377] "Caches are synced"
	I1219 02:48:17.543522       1 shared_informer.go:377] "Caches are synced"
	I1219 02:48:17.543645       1 shared_informer.go:377] "Caches are synced"
	I1219 02:48:17.544248       1 shared_informer.go:377] "Caches are synced"
	I1219 02:48:17.545422       1 shared_informer.go:377] "Caches are synced"
	I1219 02:48:17.550778       1 shared_informer.go:377] "Caches are synced"
	I1219 02:48:17.550872       1 shared_informer.go:377] "Caches are synced"
	I1219 02:48:17.550905       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1219 02:48:17.550910       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1219 02:48:17.550815       1 shared_informer.go:377] "Caches are synced"
	I1219 02:48:17.550838       1 shared_informer.go:377] "Caches are synced"
	I1219 02:48:17.550844       1 shared_informer.go:377] "Caches are synced"
	I1219 02:48:17.566419       1 shared_informer.go:377] "Caches are synced"
	I1219 02:48:17.620951       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-controller-manager [ae201f753e2fd25d72bcc3e9cb3882717e0a242162a84fba4346388878190fb2] <==
	I1219 02:49:00.081729       1 shared_informer.go:377] "Caches are synced"
	I1219 02:49:00.082216       1 shared_informer.go:377] "Caches are synced"
	I1219 02:49:00.083156       1 shared_informer.go:377] "Caches are synced"
	I1219 02:49:00.083357       1 shared_informer.go:377] "Caches are synced"
	I1219 02:49:00.078579       1 shared_informer.go:377] "Caches are synced"
	I1219 02:49:00.090659       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 02:49:00.107490       1 shared_informer.go:377] "Caches are synced"
	I1219 02:49:00.178934       1 shared_informer.go:377] "Caches are synced"
	I1219 02:49:00.178949       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1219 02:49:00.178953       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1219 02:49:00.191557       1 shared_informer.go:377] "Caches are synced"
	I1219 02:49:00.534378       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	I1219 02:49:30.113401       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" resource="kongcustomentities.configuration.konghq.com"
	I1219 02:49:30.113472       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" resource="kongconsumers.configuration.konghq.com"
	I1219 02:49:30.113504       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" resource="kongingresses.configuration.konghq.com"
	I1219 02:49:30.113529       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" resource="kongupstreampolicies.configuration.konghq.com"
	I1219 02:49:30.113560       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" resource="ingressclassparameterses.configuration.konghq.com"
	I1219 02:49:30.113588       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" resource="kongplugins.configuration.konghq.com"
	I1219 02:49:30.113619       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" resource="tcpingresses.configuration.konghq.com"
	I1219 02:49:30.113641       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" resource="kongconsumergroups.configuration.konghq.com"
	I1219 02:49:30.113678       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" resource="udpingresses.configuration.konghq.com"
	I1219 02:49:30.113842       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 02:49:30.209012       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 02:49:31.314654       1 shared_informer.go:377] "Caches are synced"
	I1219 02:49:31.412458       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [cf6d164603577eba196a03571377b050885615eb2c1f5bc8cd2b3648f7365509] <==
	I1219 02:48:16.089375       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 02:48:16.190391       1 shared_informer.go:377] "Caches are synced"
	I1219 02:48:16.190434       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.80"]
	E1219 02:48:16.190534       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 02:48:16.235930       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1219 02:48:16.235982       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1219 02:48:16.236003       1 server_linux.go:136] "Using iptables Proxier"
	I1219 02:48:16.249667       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 02:48:16.250956       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1219 02:48:16.250969       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 02:48:16.256445       1 config.go:200] "Starting service config controller"
	I1219 02:48:16.256456       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 02:48:16.256471       1 config.go:106] "Starting endpoint slice config controller"
	I1219 02:48:16.256475       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 02:48:16.256485       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 02:48:16.256488       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 02:48:16.259898       1 config.go:309] "Starting node config controller"
	I1219 02:48:16.259923       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 02:48:16.356895       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1219 02:48:16.356971       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1219 02:48:16.357171       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 02:48:16.360722       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-proxy [f41e37e14f858d2395248ad7cc9a1110682e9292d64ed829cdfdb3235af49eb4] <==
	I1219 02:48:58.624323       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 02:48:58.724726       1 shared_informer.go:377] "Caches are synced"
	I1219 02:48:58.724765       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.80"]
	E1219 02:48:58.724874       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 02:48:58.780747       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1219 02:48:58.780861       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1219 02:48:58.780882       1 server_linux.go:136] "Using iptables Proxier"
	I1219 02:48:58.789273       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 02:48:58.790060       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1219 02:48:58.790086       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 02:48:58.792528       1 config.go:200] "Starting service config controller"
	I1219 02:48:58.797870       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 02:48:58.798070       1 config.go:106] "Starting endpoint slice config controller"
	I1219 02:48:58.798097       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 02:48:58.798122       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 02:48:58.798141       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 02:48:58.800639       1 config.go:309] "Starting node config controller"
	I1219 02:48:58.800680       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 02:48:58.800697       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 02:48:58.904091       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 02:48:58.909263       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1219 02:48:58.913162       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1a6831dc2bcef8d4b1a94250aed36dd9fb44a05ea80016e3768e48123a8e1f6c] <==
	I1219 02:48:54.971108       1 serving.go:386] Generated self-signed cert in-memory
	W1219 02:48:56.866286       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1219 02:48:56.866475       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1219 02:48:56.866502       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1219 02:48:56.866579       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1219 02:48:56.920033       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-rc.1"
	I1219 02:48:56.920064       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 02:48:56.932869       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 02:48:56.932929       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1219 02:48:56.933040       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1219 02:48:56.932961       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 02:48:57.033570       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-scheduler [5cbc3f93240053c62c6dd367d02faccdf3daa42629940378a9c34a13fcb58be8] <==
	W1219 02:48:14.280113       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1219 02:48:14.280123       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1219 02:48:14.280129       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1219 02:48:14.333945       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-rc.1"
	I1219 02:48:14.334014       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 02:48:14.342056       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1219 02:48:14.345098       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 02:48:14.345131       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 02:48:14.345152       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1219 02:48:14.426333       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1219 02:48:14.427239       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1219 02:48:14.427692       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1219 02:48:14.427758       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1219 02:48:14.427827       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1219 02:48:14.428293       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1219 02:48:14.428340       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1219 02:48:14.428441       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1219 02:48:14.437630       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	I1219 02:48:14.446950       1 shared_informer.go:377] "Caches are synced"
	I1219 02:48:40.240013       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1219 02:48:40.240095       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1219 02:48:40.240115       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1219 02:48:40.240278       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 02:48:40.240641       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1219 02:48:40.240657       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 19 02:53:42 functional-936345 kubelet[5589]: E1219 02:53:42.225610    5589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-metrics-scraper\" with ErrImagePull: \"reading manifest 1.2.2 in docker.io/kubernetesui/dashboard-metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-sb6xj" podUID="7abd05cf-cbdf-4194-be74-e3dc8d750e08"
	Dec 19 02:53:42 functional-936345 kubelet[5589]: E1219 02:53:42.435046    5589 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-sb6xj" containerName="kubernetes-dashboard-metrics-scraper"
	Dec 19 02:53:42 functional-936345 kubelet[5589]: E1219 02:53:42.438598    5589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\": ErrImagePull: reading manifest 1.2.2 in docker.io/kubernetesui/dashboard-metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-sb6xj" podUID="7abd05cf-cbdf-4194-be74-e3dc8d750e08"
	Dec 19 02:53:43 functional-936345 kubelet[5589]: E1219 02:53:43.875603    5589 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766112823875354454  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:174538}  inodes_used:{value:87}}"
	Dec 19 02:53:43 functional-936345 kubelet[5589]: E1219 02:53:43.875628    5589 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766112823875354454  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:174538}  inodes_used:{value:87}}"
	Dec 19 02:53:53 functional-936345 kubelet[5589]: E1219 02:53:53.704045    5589 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-936345" containerName="etcd"
	Dec 19 02:53:53 functional-936345 kubelet[5589]: E1219 02:53:53.798541    5589 manager.go:1119] Failed to create existing container: /kubepods/burstable/pode24264402a1697b7f5edbdf8e719b1ae/crio-3032bc4eef53c29fcaa42cae1398d811952a3c3289a6e573a2d367833dae7cd2: Error finding container 3032bc4eef53c29fcaa42cae1398d811952a3c3289a6e573a2d367833dae7cd2: Status 404 returned error can't find the container with id 3032bc4eef53c29fcaa42cae1398d811952a3c3289a6e573a2d367833dae7cd2
	Dec 19 02:53:53 functional-936345 kubelet[5589]: E1219 02:53:53.798746    5589 manager.go:1119] Failed to create existing container: /kubepods/burstable/podc78855ae3727af10a88609c41e957243/crio-d38d8b5ae41c4328ced9df76847a7d97a43a0851e2cc16e31fd4bc1c4e24c235: Error finding container d38d8b5ae41c4328ced9df76847a7d97a43a0851e2cc16e31fd4bc1c4e24c235: Status 404 returned error can't find the container with id d38d8b5ae41c4328ced9df76847a7d97a43a0851e2cc16e31fd4bc1c4e24c235
	Dec 19 02:53:53 functional-936345 kubelet[5589]: E1219 02:53:53.799055    5589 manager.go:1119] Failed to create existing container: /kubepods/besteffort/pod5822977e-9a2f-4ca6-8cb6-26f161c23791/crio-9589988aeaee6b32374ef45e84566e546064d358ea05730fe94ccb2632816218: Error finding container 9589988aeaee6b32374ef45e84566e546064d358ea05730fe94ccb2632816218: Status 404 returned error can't find the container with id 9589988aeaee6b32374ef45e84566e546064d358ea05730fe94ccb2632816218
	Dec 19 02:53:53 functional-936345 kubelet[5589]: E1219 02:53:53.799247    5589 manager.go:1119] Failed to create existing container: /kubepods/besteffort/podcd92a8c6-e659-4184-bcbd-43da477075c7/crio-ace9301814f0ffaaec2e3bacca3662ed1afc226f3f5ce4e68dcb977e524983ae: Error finding container ace9301814f0ffaaec2e3bacca3662ed1afc226f3f5ce4e68dcb977e524983ae: Status 404 returned error can't find the container with id ace9301814f0ffaaec2e3bacca3662ed1afc226f3f5ce4e68dcb977e524983ae
	Dec 19 02:53:53 functional-936345 kubelet[5589]: E1219 02:53:53.799370    5589 manager.go:1119] Failed to create existing container: /kubepods/burstable/pod6293c621-6771-4f49-82da-b9855c06b18c/crio-96419d7b2a3c490726e8415d3406d3339ff543893b9fcf82f96df0c362123b09: Error finding container 96419d7b2a3c490726e8415d3406d3339ff543893b9fcf82f96df0c362123b09: Status 404 returned error can't find the container with id 96419d7b2a3c490726e8415d3406d3339ff543893b9fcf82f96df0c362123b09
	Dec 19 02:53:53 functional-936345 kubelet[5589]: E1219 02:53:53.799618    5589 manager.go:1119] Failed to create existing container: /kubepods/burstable/podf2b4be683f1b2146cda99455d27828d0/crio-c6fec3d2091daa0779891e3a982fd27bf9b549e96b7343ef851e6090bd001aaa: Error finding container c6fec3d2091daa0779891e3a982fd27bf9b549e96b7343ef851e6090bd001aaa: Status 404 returned error can't find the container with id c6fec3d2091daa0779891e3a982fd27bf9b549e96b7343ef851e6090bd001aaa
	Dec 19 02:53:53 functional-936345 kubelet[5589]: E1219 02:53:53.876945    5589 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766112833876588409  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:174538}  inodes_used:{value:87}}"
	Dec 19 02:53:53 functional-936345 kubelet[5589]: E1219 02:53:53.876964    5589 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766112833876588409  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:174538}  inodes_used:{value:87}}"
	Dec 19 02:53:54 functional-936345 kubelet[5589]: E1219 02:53:54.703596    5589 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-sb6xj" containerName="kubernetes-dashboard-metrics-scraper"
	Dec 19 02:53:58 functional-936345 kubelet[5589]: E1219 02:53:58.703613    5589 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-936345" containerName="kube-scheduler"
	Dec 19 02:54:03 functional-936345 kubelet[5589]: E1219 02:54:03.878842    5589 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766112843878552050  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:174538}  inodes_used:{value:87}}"
	Dec 19 02:54:03 functional-936345 kubelet[5589]: E1219 02:54:03.878861    5589 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766112843878552050  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:174538}  inodes_used:{value:87}}"
	Dec 19 02:54:06 functional-936345 kubelet[5589]: E1219 02:54:06.704002    5589 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-936345" containerName="kube-apiserver"
	Dec 19 02:54:13 functional-936345 kubelet[5589]: E1219 02:54:13.874915    5589 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Dec 19 02:54:13 functional-936345 kubelet[5589]: E1219 02:54:13.874955    5589 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Dec 19 02:54:13 functional-936345 kubelet[5589]: E1219 02:54:13.875195    5589 kuberuntime_manager.go:1664] "Unhandled Error" err="container echo-server start failed in pod hello-node-connect-9f67c86d4-wg5l4_default(6ffd5b41-1163-470d-8eca-617ef27bb37b): ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 19 02:54:13 functional-936345 kubelet[5589]: E1219 02:54:13.875230    5589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-9f67c86d4-wg5l4" podUID="6ffd5b41-1163-470d-8eca-617ef27bb37b"
	Dec 19 02:54:13 functional-936345 kubelet[5589]: E1219 02:54:13.881756    5589 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766112853880479957  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:174538}  inodes_used:{value:87}}"
	Dec 19 02:54:13 functional-936345 kubelet[5589]: E1219 02:54:13.881828    5589 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766112853880479957  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:174538}  inodes_used:{value:87}}"
	
	
	==> storage-provisioner [55c562b08e9b8503c78d017881ca66b665a54239ea96beb94ed77195ee8e7043] <==
	I1219 02:48:15.858877       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1219 02:48:15.888980       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1219 02:48:15.889024       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1219 02:48:15.896113       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:48:19.353731       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:48:23.614534       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:48:27.212955       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:48:30.266366       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:48:33.288567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:48:33.299877       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1219 02:48:33.300148       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1219 02:48:33.300417       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-936345_fc50c04d-f684-4500-8968-a4dea833c4c5!
	I1219 02:48:33.300674       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"85c46f16-8e78-4eed-b914-486f44a7c906", APIVersion:"v1", ResourceVersion:"544", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-936345_fc50c04d-f684-4500-8968-a4dea833c4c5 became leader
	W1219 02:48:33.306363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:48:33.316199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1219 02:48:33.402132       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-936345_fc50c04d-f684-4500-8968-a4dea833c4c5!
	W1219 02:48:35.319115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:48:35.333563       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:48:37.337913       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:48:37.343915       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:48:39.347257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:48:39.354910       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [9b9d4f0960c05f3f429e70c4f2fe9f0011729c50b33da6bd7b3b81b3de517ab3] <==
	W1219 02:53:57.231307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:53:59.234868       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:53:59.239953       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:54:01.243514       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:54:01.251135       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:54:03.254859       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:54:03.259425       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:54:05.262623       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:54:05.269854       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:54:07.272472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:54:07.277312       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:54:09.281211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:54:09.285457       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:54:11.288189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:54:11.295453       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:54:13.298167       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:54:13.302325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:54:15.305858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:54:15.313035       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:54:17.316635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:54:17.321674       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:54:19.325534       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:54:19.330329       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:54:21.333563       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:54:21.342376       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-936345 -n functional-936345
helpers_test.go:270: (dbg) Run:  kubectl --context functional-936345 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: busybox-mount hello-node-5758569b79-sxs2v hello-node-connect-9f67c86d4-wg5l4 mysql-7d7b65bc95-c6dqh kubernetes-dashboard-api-6f6bc9c789-nhrc9 kubernetes-dashboard-auth-7d44b44fcf-wk8mz kubernetes-dashboard-kong-78b7499b45-tkd7b kubernetes-dashboard-metrics-scraper-594bbfb84b-sb6xj kubernetes-dashboard-web-7f7574785f-gldgt
helpers_test.go:283: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context functional-936345 describe pod busybox-mount hello-node-5758569b79-sxs2v hello-node-connect-9f67c86d4-wg5l4 mysql-7d7b65bc95-c6dqh kubernetes-dashboard-api-6f6bc9c789-nhrc9 kubernetes-dashboard-auth-7d44b44fcf-wk8mz kubernetes-dashboard-kong-78b7499b45-tkd7b kubernetes-dashboard-metrics-scraper-594bbfb84b-sb6xj kubernetes-dashboard-web-7f7574785f-gldgt
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context functional-936345 describe pod busybox-mount hello-node-5758569b79-sxs2v hello-node-connect-9f67c86d4-wg5l4 mysql-7d7b65bc95-c6dqh kubernetes-dashboard-api-6f6bc9c789-nhrc9 kubernetes-dashboard-auth-7d44b44fcf-wk8mz kubernetes-dashboard-kong-78b7499b45-tkd7b kubernetes-dashboard-metrics-scraper-594bbfb84b-sb6xj kubernetes-dashboard-web-7f7574785f-gldgt: exit status 1 (93.820458ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-936345/192.168.39.80
	Start Time:       Fri, 19 Dec 2025 02:49:20 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://6e1a5f3eb5f526298bd503d941552876969e9e0556abcf372f10b81ea35a6035
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Fri, 19 Dec 2025 02:50:26 +0000
	      Finished:     Fri, 19 Dec 2025 02:50:26 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pq8s9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-pq8s9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m2s   default-scheduler  Successfully assigned default/busybox-mount to functional-936345
	  Normal  Pulling    5m1s   kubelet            spec.containers{mount-munger}: Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     3m56s  kubelet            spec.containers{mount-munger}: Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.991s (1m5.047s including waiting). Image size: 4631262 bytes.
	  Normal  Created    3m56s  kubelet            spec.containers{mount-munger}: Container created
	  Normal  Started    3m56s  kubelet            spec.containers{mount-munger}: Container started
	
	
	Name:             hello-node-5758569b79-sxs2v
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-936345/192.168.39.80
	Start Time:       Fri, 19 Dec 2025 02:49:18 +0000
	Labels:           app=hello-node
	                  pod-template-hash=5758569b79
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-5758569b79
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ddmsm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-ddmsm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  5m4s                  default-scheduler  Successfully assigned default/hello-node-5758569b79-sxs2v to functional-936345
	  Warning  Failed     3m59s                 kubelet            spec.containers{echo-server}: Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m59s                 kubelet            spec.containers{echo-server}: Error: ErrImagePull
	  Normal   BackOff    3m59s                 kubelet            spec.containers{echo-server}: Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     3m59s                 kubelet            spec.containers{echo-server}: Error: ImagePullBackOff
	  Normal   Pulling    3m46s (x2 over 5m3s)  kubelet            spec.containers{echo-server}: Pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-9f67c86d4-wg5l4
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-936345/192.168.39.80
	Start Time:       Fri, 19 Dec 2025 02:49:18 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=9f67c86d4
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-46q8d (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-46q8d:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  5m4s                  default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-wg5l4 to functional-936345
	  Warning  Failed     4m31s                 kubelet            spec.containers{echo-server}: Failed to pull image "kicbase/echo-server": initializing source docker://kicbase/echo-server:latest: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    4m30s                 kubelet            spec.containers{echo-server}: Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m30s                 kubelet            spec.containers{echo-server}: Error: ImagePullBackOff
	  Normal   Pulling    4m20s (x2 over 5m3s)  kubelet            spec.containers{echo-server}: Pulling image "kicbase/echo-server"
	  Warning  Failed     9s (x2 over 4m31s)    kubelet            spec.containers{echo-server}: Error: ErrImagePull
	  Warning  Failed     9s                    kubelet            spec.containers{echo-server}: Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	
	
	Name:             mysql-7d7b65bc95-c6dqh
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-936345/192.168.39.80
	Start Time:       Fri, 19 Dec 2025 02:50:39 +0000
	Labels:           app=mysql
	                  pod-template-hash=7d7b65bc95
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/mysql-7d7b65bc95
	Containers:
	  mysql:
	    Container ID:   
	    Image:          public.ecr.aws/docker/library/mysql:8.4
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kxs7d (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-kxs7d:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  3m43s  default-scheduler  Successfully assigned default/mysql-7d7b65bc95-c6dqh to functional-936345
	  Normal  Pulling    3m42s  kubelet            spec.containers{mysql}: Pulling image "public.ecr.aws/docker/library/mysql:8.4"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "kubernetes-dashboard-api-6f6bc9c789-nhrc9" not found
	Error from server (NotFound): pods "kubernetes-dashboard-auth-7d44b44fcf-wk8mz" not found
	Error from server (NotFound): pods "kubernetes-dashboard-kong-78b7499b45-tkd7b" not found
	Error from server (NotFound): pods "kubernetes-dashboard-metrics-scraper-594bbfb84b-sb6xj" not found
	Error from server (NotFound): pods "kubernetes-dashboard-web-7f7574785f-gldgt" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context functional-936345 describe pod busybox-mount hello-node-5758569b79-sxs2v hello-node-connect-9f67c86d4-wg5l4 mysql-7d7b65bc95-c6dqh kubernetes-dashboard-api-6f6bc9c789-nhrc9 kubernetes-dashboard-auth-7d44b44fcf-wk8mz kubernetes-dashboard-kong-78b7499b45-tkd7b kubernetes-dashboard-metrics-scraper-594bbfb84b-sb6xj kubernetes-dashboard-web-7f7574785f-gldgt: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd (301.94s)
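The DashboardCmd failure above traces back to the kubelet entries earlier in this log dump: every pull of the kubernetes-dashboard images from docker.io is rejected with toomanyrequests (Docker Hub's unauthenticated pull rate limit), so the non-running dashboard pods never get past ImagePullBackOff. As a rough diagnostic sketch against the same context (not a step this test performs; the context and namespace names are taken from the output above), the failed image-pull events could be listed directly:

  kubectl --context functional-936345 get events -n kubernetes-dashboard --field-selector reason=Failed --sort-by=.lastTimestamp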

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect (602.69s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-936345 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-936345 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-9f67c86d4-wg5l4" [6ffd5b41-1163-470d-8eca-617ef27bb37b] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1645: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-936345 -n functional-936345
functional_test.go:1645: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-12-19 02:59:18.764198886 +0000 UTC m=+2061.802378201
functional_test.go:1645: (dbg) Run:  kubectl --context functional-936345 describe po hello-node-connect-9f67c86d4-wg5l4 -n default
functional_test.go:1645: (dbg) kubectl --context functional-936345 describe po hello-node-connect-9f67c86d4-wg5l4 -n default:
Name:             hello-node-connect-9f67c86d4-wg5l4
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-936345/192.168.39.80
Start Time:       Fri, 19 Dec 2025 02:49:18 +0000
Labels:           app=hello-node-connect
pod-template-hash=9f67c86d4
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-46q8d (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-46q8d:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-wg5l4 to functional-936345
Warning  Failed     9m27s                  kubelet            spec.containers{echo-server}: Failed to pull image "kicbase/echo-server": initializing source docker://kicbase/echo-server:latest: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     5m5s                   kubelet            spec.containers{echo-server}: Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    4m42s (x3 over 9m59s)  kubelet            spec.containers{echo-server}: Pulling image "kicbase/echo-server"
Warning  Failed     22s (x3 over 9m27s)    kubelet            spec.containers{echo-server}: Error: ErrImagePull
Warning  Failed     22s                    kubelet            spec.containers{echo-server}: Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    0s (x4 over 9m26s)     kubelet            spec.containers{echo-server}: Back-off pulling image "kicbase/echo-server"
Warning  Failed     0s (x4 over 9m26s)     kubelet            spec.containers{echo-server}: Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-936345 logs hello-node-connect-9f67c86d4-wg5l4 -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-936345 logs hello-node-connect-9f67c86d4-wg5l4 -n default: exit status 1 (77.540393ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-9f67c86d4-wg5l4" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-936345 logs hello-node-connect-9f67c86d4-wg5l4 -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
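The 10m0s timeout above follows directly from the pull failures in the pod events: each attempt to pull kicbase/echo-server from docker.io ends in toomanyrequests, so the echo-server container never starts and the pod stays Pending for the whole wait window. As a hedged mitigation sketch (not something this run attempts; the profile name is taken from the test context), the image could be loaded into the node's container storage out of band so the kubelet never needs to pull from Docker Hub:

  minikube -p functional-936345 image load kicbase/echo-server:latest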
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-936345 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-9f67c86d4-wg5l4
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-936345/192.168.39.80
Start Time:       Fri, 19 Dec 2025 02:49:18 +0000
Labels:           app=hello-node-connect
pod-template-hash=9f67c86d4
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-46q8d (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-46q8d:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-wg5l4 to functional-936345
Warning  Failed     9m27s                  kubelet            spec.containers{echo-server}: Failed to pull image "kicbase/echo-server": initializing source docker://kicbase/echo-server:latest: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     5m5s                   kubelet            spec.containers{echo-server}: Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    4m42s (x3 over 9m59s)  kubelet            spec.containers{echo-server}: Pulling image "kicbase/echo-server"
Warning  Failed     22s (x3 over 9m27s)    kubelet            spec.containers{echo-server}: Error: ErrImagePull
Warning  Failed     22s                    kubelet            spec.containers{echo-server}: Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    0s (x4 over 9m26s)     kubelet            spec.containers{echo-server}: Back-off pulling image "kicbase/echo-server"
Warning  Failed     0s (x4 over 9m26s)     kubelet            spec.containers{echo-server}: Error: ImagePullBackOff

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-936345 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-936345 logs -l app=hello-node-connect: exit status 1 (66.11096ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-9f67c86d4-wg5l4" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-936345 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-936345 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.98.148.31
IPs:                      10.98.148.31
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31224/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
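The empty Endpoints field in the service description above is the service-side symptom of the same problem: with the hello-node-connect pod never becoming Ready, no endpoint is published behind NodePort 31224, so nothing can answer connections through the service. One hedged way to confirm this from the same context (not part of the test) is to list the EndpointSlices owned by the service via the standard kubernetes.io/service-name label:

  kubectl --context functional-936345 get endpointslices -l kubernetes.io/service-name=hello-node-connect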
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-936345 -n functional-936345
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-936345 logs -n 25: (1.230052811s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                             ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ license        │                                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 19 Dec 25 02:50 UTC │ 19 Dec 25 02:50 UTC │
	│ ssh            │ functional-936345 ssh sudo systemctl is-active docker                                                                                                        │ functional-936345 │ jenkins │ v1.37.0 │ 19 Dec 25 02:50 UTC │                     │
	│ ssh            │ functional-936345 ssh sudo systemctl is-active containerd                                                                                                    │ functional-936345 │ jenkins │ v1.37.0 │ 19 Dec 25 02:50 UTC │                     │
	│ ssh            │ functional-936345 ssh echo hello                                                                                                                             │ functional-936345 │ jenkins │ v1.37.0 │ 19 Dec 25 02:50 UTC │ 19 Dec 25 02:50 UTC │
	│ ssh            │ functional-936345 ssh cat /etc/hostname                                                                                                                      │ functional-936345 │ jenkins │ v1.37.0 │ 19 Dec 25 02:50 UTC │ 19 Dec 25 02:50 UTC │
	│ ssh            │ functional-936345 ssh sudo cat /etc/test/nested/copy/8937/hosts                                                                                              │ functional-936345 │ jenkins │ v1.37.0 │ 19 Dec 25 02:50 UTC │ 19 Dec 25 02:50 UTC │
	│ cp             │ functional-936345 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                                                           │ functional-936345 │ jenkins │ v1.37.0 │ 19 Dec 25 02:54 UTC │ 19 Dec 25 02:54 UTC │
	│ ssh            │ functional-936345 ssh -n functional-936345 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-936345 │ jenkins │ v1.37.0 │ 19 Dec 25 02:54 UTC │ 19 Dec 25 02:54 UTC │
	│ cp             │ functional-936345 cp functional-936345:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelCpCm2796251211/001/cp-test.txt │ functional-936345 │ jenkins │ v1.37.0 │ 19 Dec 25 02:54 UTC │ 19 Dec 25 02:54 UTC │
	│ ssh            │ functional-936345 ssh -n functional-936345 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-936345 │ jenkins │ v1.37.0 │ 19 Dec 25 02:54 UTC │ 19 Dec 25 02:54 UTC │
	│ cp             │ functional-936345 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                    │ functional-936345 │ jenkins │ v1.37.0 │ 19 Dec 25 02:54 UTC │ 19 Dec 25 02:54 UTC │
	│ ssh            │ functional-936345 ssh -n functional-936345 sudo cat /tmp/does/not/exist/cp-test.txt                                                                          │ functional-936345 │ jenkins │ v1.37.0 │ 19 Dec 25 02:54 UTC │ 19 Dec 25 02:54 UTC │
	│ addons         │ functional-936345 addons list                                                                                                                                │ functional-936345 │ jenkins │ v1.37.0 │ 19 Dec 25 02:55 UTC │ 19 Dec 25 02:55 UTC │
	│ addons         │ functional-936345 addons list -o json                                                                                                                        │ functional-936345 │ jenkins │ v1.37.0 │ 19 Dec 25 02:55 UTC │ 19 Dec 25 02:55 UTC │
	│ image          │ functional-936345 image ls --format short --alsologtostderr                                                                                                  │ functional-936345 │ jenkins │ v1.37.0 │ 19 Dec 25 02:55 UTC │ 19 Dec 25 02:55 UTC │
	│ image          │ functional-936345 image ls --format yaml --alsologtostderr                                                                                                   │ functional-936345 │ jenkins │ v1.37.0 │ 19 Dec 25 02:55 UTC │ 19 Dec 25 02:55 UTC │
	│ ssh            │ functional-936345 ssh pgrep buildkitd                                                                                                                        │ functional-936345 │ jenkins │ v1.37.0 │ 19 Dec 25 02:55 UTC │                     │
	│ image          │ functional-936345 image build -t localhost/my-image:functional-936345 testdata/build --alsologtostderr                                                       │ functional-936345 │ jenkins │ v1.37.0 │ 19 Dec 25 02:55 UTC │ 19 Dec 25 02:55 UTC │
	│ image          │ functional-936345 image ls                                                                                                                                   │ functional-936345 │ jenkins │ v1.37.0 │ 19 Dec 25 02:55 UTC │ 19 Dec 25 02:55 UTC │
	│ image          │ functional-936345 image ls --format json --alsologtostderr                                                                                                   │ functional-936345 │ jenkins │ v1.37.0 │ 19 Dec 25 02:55 UTC │ 19 Dec 25 02:55 UTC │
	│ image          │ functional-936345 image ls --format table --alsologtostderr                                                                                                  │ functional-936345 │ jenkins │ v1.37.0 │ 19 Dec 25 02:55 UTC │ 19 Dec 25 02:55 UTC │
	│ update-context │ functional-936345 update-context --alsologtostderr -v=2                                                                                                      │ functional-936345 │ jenkins │ v1.37.0 │ 19 Dec 25 02:55 UTC │ 19 Dec 25 02:55 UTC │
	│ update-context │ functional-936345 update-context --alsologtostderr -v=2                                                                                                      │ functional-936345 │ jenkins │ v1.37.0 │ 19 Dec 25 02:55 UTC │ 19 Dec 25 02:55 UTC │
	│ update-context │ functional-936345 update-context --alsologtostderr -v=2                                                                                                      │ functional-936345 │ jenkins │ v1.37.0 │ 19 Dec 25 02:55 UTC │ 19 Dec 25 02:55 UTC │
	│ service        │ functional-936345 service list                                                                                                                               │ functional-936345 │ jenkins │ v1.37.0 │ 19 Dec 25 02:59 UTC │                     │
	└────────────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 02:49:20
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 02:49:20.613050   19647 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:49:20.613357   19647 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:49:20.613374   19647 out.go:374] Setting ErrFile to fd 2...
	I1219 02:49:20.613381   19647 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:49:20.613718   19647 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5010/.minikube/bin
	I1219 02:49:20.614304   19647 out.go:368] Setting JSON to false
	I1219 02:49:20.615290   19647 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1905,"bootTime":1766110656,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 02:49:20.615343   19647 start.go:143] virtualization: kvm guest
	I1219 02:49:20.617630   19647 out.go:179] * [functional-936345] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 02:49:20.618815   19647 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 02:49:20.618812   19647 notify.go:221] Checking for updates...
	I1219 02:49:20.621213   19647 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 02:49:20.622209   19647 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-5010/kubeconfig
	I1219 02:49:20.623046   19647 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5010/.minikube
	I1219 02:49:20.624030   19647 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 02:49:20.624937   19647 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 02:49:20.626555   19647 config.go:182] Loaded profile config "functional-936345": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1219 02:49:20.627264   19647 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 02:49:20.657353   19647 out.go:179] * Using the kvm2 driver based on existing profile
	I1219 02:49:20.658439   19647 start.go:309] selected driver: kvm2
	I1219 02:49:20.658450   19647 start.go:928] validating driver "kvm2" against &{Name:functional-936345 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-rc.1 ClusterName:functional-936345 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 02:49:20.658547   19647 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 02:49:20.659392   19647 cni.go:84] Creating CNI manager for ""
	I1219 02:49:20.659478   19647 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1219 02:49:20.659525   19647 start.go:353] cluster config:
	{Name:functional-936345 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-936345 Namespace:default APIS
erverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSiz
e:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 02:49:20.660700   19647 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 19 02:59:19 functional-936345 crio[5228]: time="2025-12-19 02:59:19.705674063Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:0db09e281a9673507063c8d4523ab6a2d2ba640a8ffeaa5ee8537081a621d4e6,Metadata:&PodSandboxMetadata{Name:sp-pod,Uid:7e6b459d-dc6a-4646-a8d0-bca11659050f,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766113097613985566,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7e6b459d-dc6a-4646-a8d0-bca11659050f,test: storage-provisioner,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"test\":\"storage-provisioner\"},\"name\":\"sp-pod\",\"namespace\":\"default\"},\"spec\":{\"containers\":[{\"image\":\"public.ecr.aws/nginx/nginx:alpine\",\"name\":\"myfrontend\",\"volumeMounts\":[{\"mountPath\":\"/tmp/mount\",\"name\":\
"mypd\"}]}],\"volumes\":[{\"name\":\"mypd\",\"persistentVolumeClaim\":{\"claimName\":\"myclaim\"}}]}}\n,kubernetes.io/config.seen: 2025-12-19T02:58:17.295689254Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:12d8e17e2885642262caf32253c2465fa887352df17f791e16b872c3fe3391b7,Metadata:&PodSandboxMetadata{Name:mysql-7d7b65bc95-c6dqh,Uid:367404f1-003b-464d-8fb8-d9c1dba5d64c,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766112639972598985,Labels:map[string]string{app: mysql,io.kubernetes.container.name: POD,io.kubernetes.pod.name: mysql-7d7b65bc95-c6dqh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 367404f1-003b-464d-8fb8-d9c1dba5d64c,pod-template-hash: 7d7b65bc95,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-19T02:50:39.656300167Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2b7678841ea6f3566a26ad53b9e5ee61d6e53e7657df7b89b05bcdcc25f99052,Metadata:&PodSandboxMetadata{Name:kubernetes-dashboard-api-6f6bc9c789-nhrc9,Uid:9f32
2124-9781-49e4-962d-c22c6407c335,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766112565475236808,Labels:map[string]string{app.kubernetes.io/component: api,app.kubernetes.io/instance: kubernetes-dashboard,app.kubernetes.io/managed-by: Helm,app.kubernetes.io/name: kubernetes-dashboard-api,app.kubernetes.io/part-of: kubernetes-dashboard,app.kubernetes.io/version: 1.14.0,helm.sh/chart: kubernetes-dashboard-7.14.0,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kubernetes-dashboard-api-6f6bc9c789-nhrc9,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 9f322124-9781-49e4-962d-c22c6407c335,pod-template-hash: 6f6bc9c789,},Annotations:map[string]string{checksum/config: fa647d211d4c6cc619453a74062251f4313a607ea01bc9ec36e4dd7251e428e1,kubernetes.io/config.seen: 2025-12-19T02:49:25.096952272Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5899ab4e1d575e49c577b21b50a176843ff29602e135349aebaa67a495c53bd4,Metadata:&PodSandboxMetadata{Name:kub
ernetes-dashboard-metrics-scraper-594bbfb84b-sb6xj,Uid:7abd05cf-cbdf-4194-be74-e3dc8d750e08,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766112565459047574,Labels:map[string]string{app.kubernetes.io/component: metrics-scraper,app.kubernetes.io/instance: kubernetes-dashboard,app.kubernetes.io/managed-by: Helm,app.kubernetes.io/name: kubernetes-dashboard-metrics-scraper,app.kubernetes.io/part-of: kubernetes-dashboard,app.kubernetes.io/version: 1.2.2,helm.sh/chart: kubernetes-dashboard-7.14.0,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kubernetes-dashboard-metrics-scraper-594bbfb84b-sb6xj,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 7abd05cf-cbdf-4194-be74-e3dc8d750e08,pod-template-hash: 594bbfb84b,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-19T02:49:25.100057895Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fee42eb3293112b0edfcfff7a2af22a886c5465d4385deb60bb6652879b4cb9d,Metadata:&PodSandboxMetad
ata{Name:kubernetes-dashboard-auth-7d44b44fcf-wk8mz,Uid:49b98e8f-156f-4653-b538-d385e192d34e,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766112565392994599,Labels:map[string]string{app.kubernetes.io/component: auth,app.kubernetes.io/instance: kubernetes-dashboard,app.kubernetes.io/managed-by: Helm,app.kubernetes.io/name: kubernetes-dashboard-auth,app.kubernetes.io/part-of: kubernetes-dashboard,app.kubernetes.io/version: 1.4.0,helm.sh/chart: kubernetes-dashboard-7.14.0,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kubernetes-dashboard-auth-7d44b44fcf-wk8mz,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 49b98e8f-156f-4653-b538-d385e192d34e,pod-template-hash: 7d44b44fcf,},Annotations:map[string]string{checksum/config: d1b11ae01c085113ae469537149391338af8dbb78343573c767f286c08e207ba,kubernetes.io/config.seen: 2025-12-19T02:49:25.058464765Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cffc1f1defe052765f2e05589666a04deec294ef
f2fd4a84c8b2bfc56748d132,Metadata:&PodSandboxMetadata{Name:kubernetes-dashboard-kong-78b7499b45-tkd7b,Uid:1b89b0f7-4865-4bfd-a624-78952381464e,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766112565390491832,Labels:map[string]string{app: kubernetes-dashboard-kong,app.kubernetes.io/component: app,app.kubernetes.io/instance: kubernetes-dashboard,app.kubernetes.io/managed-by: Helm,app.kubernetes.io/name: kong,app.kubernetes.io/version: 3.9,helm.sh/chart: kong-2.52.0,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kubernetes-dashboard-kong-78b7499b45-tkd7b,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 1b89b0f7-4865-4bfd-a624-78952381464e,pod-template-hash: 78b7499b45,version: 3.9,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-19T02:49:25.055155450Z,kubernetes.io/config.source: api,kuma.io/gateway: enabled,kuma.io/service-account-token-volume: kubernetes-dashboard-kong-token,traffic.sidecar.istio.io/includeInboundPorts: ,},Runtime
Handler:,},&PodSandbox{Id:d2305c3257477e18e3e377d00d157dde939dbc2718439f5c440a31760608d8a5,Metadata:&PodSandboxMetadata{Name:kubernetes-dashboard-web-7f7574785f-gldgt,Uid:2d51d168-b185-4ded-8b62-20e2baa4cd59,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766112565364260037,Labels:map[string]string{app.kubernetes.io/component: web,app.kubernetes.io/instance: kubernetes-dashboard,app.kubernetes.io/managed-by: Helm,app.kubernetes.io/name: kubernetes-dashboard-web,app.kubernetes.io/part-of: kubernetes-dashboard,app.kubernetes.io/version: 1.7.0,helm.sh/chart: kubernetes-dashboard-7.14.0,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kubernetes-dashboard-web-7f7574785f-gldgt,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 2d51d168-b185-4ded-8b62-20e2baa4cd59,pod-template-hash: 7f7574785f,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-19T02:49:25.049192374Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3274829b28
d2a3f4054d3820bb6a8e634d148385f4a4f4308235f21d559594ea,Metadata:&PodSandboxMetadata{Name:hello-node-5758569b79-sxs2v,Uid:d0ff39cd-f508-455e-9968-48efd92e3e42,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766112558788582014,Labels:map[string]string{app: hello-node,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-node-5758569b79-sxs2v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d0ff39cd-f508-455e-9968-48efd92e3e42,pod-template-hash: 5758569b79,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-19T02:49:18.447219206Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:01595f241b25fedef8cbece5080a25209e8e9d3f8b763995c8aea4e89232a3a4,Metadata:&PodSandboxMetadata{Name:hello-node-connect-9f67c86d4-wg5l4,Uid:6ffd5b41-1163-470d-8eca-617ef27bb37b,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766112558721860179,Labels:map[string]string{app: hello-node-connect,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-node-c
onnect-9f67c86d4-wg5l4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6ffd5b41-1163-470d-8eca-617ef27bb37b,pod-template-hash: 9f67c86d4,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-19T02:49:18.390567865Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e918007929d35b5cb8e3dc0752427803b8496d458837ae4795f9a999b104b7c6,Metadata:&PodSandboxMetadata{Name:coredns-7d764666f9-vbbbl,Uid:6293c621-6771-4f49-82da-b9855c06b18c,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1766112538139401324,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7d764666f9-vbbbl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6293c621-6771-4f49-82da-b9855c06b18c,k8s-app: kube-dns,pod-template-hash: 7d764666f9,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-19T02:48:57.652348091Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:70ef0261fbcedcbc5ae3230cf923a4dfffb20600a7e4151d688f1
fb654ccf238,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:cd92a8c6-e659-4184-bcbd-43da477075c7,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1766112538002005714,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd92a8c6-e659-4184-bcbd-43da477075c7,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPa
th\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-12-19T02:48:57.652343648Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a92b49d24460aa320c25e155d6caf6b08b0235c1544469c7b8221a11f487c3b4,Metadata:&PodSandboxMetadata{Name:kube-proxy-lfp8r,Uid:5822977e-9a2f-4ca6-8cb6-26f161c23791,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1766112537999032411,Labels:map[string]string{controller-revision-hash: 57c97698cf,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-lfp8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5822977e-9a2f-4ca6-8cb6-26f161c23791,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-19T02:48:57.652352444Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:35cb157095476b02b0
abeaf08d488668f96c9e0d4728acd3994b964c3700b468,Metadata:&PodSandboxMetadata{Name:kube-scheduler-functional-936345,Uid:f2b4be683f1b2146cda99455d27828d0,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1766112534179352033,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-functional-936345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2b4be683f1b2146cda99455d27828d0,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f2b4be683f1b2146cda99455d27828d0,kubernetes.io/config.seen: 2025-12-19T02:48:53.649287718Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f9bc0a0f70474fa6cc1046f5edb3fb37d910306f5e376c522bec7f7178b89806,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-functional-936345,Uid:e24264402a1697b7f5edbdf8e719b1ae,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1766112534160251265,Labels:map[string]string{component: kube-controll
er-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-functional-936345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e24264402a1697b7f5edbdf8e719b1ae,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e24264402a1697b7f5edbdf8e719b1ae,kubernetes.io/config.seen: 2025-12-19T02:48:53.649286967Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d02d27fb02ddf4040b24d90914f56b9a7e2f13861d573d1b8756bef024538549,Metadata:&PodSandboxMetadata{Name:kube-apiserver-functional-936345,Uid:711dafb450914341c18eb7924278c787,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766112534154383669,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-functional-936345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 711dafb450914341c18eb7924278c787,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiser
ver.advertise-address.endpoint: 192.168.39.80:8441,kubernetes.io/config.hash: 711dafb450914341c18eb7924278c787,kubernetes.io/config.seen: 2025-12-19T02:48:53.649285797Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e27f37b5433ef140fc0770f3aa8fad8c203db6e4c12219c4ecf0d9f7e70d6d3a,Metadata:&PodSandboxMetadata{Name:etcd-functional-936345,Uid:c78855ae3727af10a88609c41e957243,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1766112534149386057,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-936345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c78855ae3727af10a88609c41e957243,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.80:2379,kubernetes.io/config.hash: c78855ae3727af10a88609c41e957243,kubernetes.io/config.seen: 2025-12-19T02:48:53.649282754Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collect
or/interceptors.go:74" id=d9e2e834-8fcb-4928-b4c8-cc76c2b1f8d1 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 19 02:59:19 functional-936345 crio[5228]: time="2025-12-19 02:59:19.707713470Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6d372d30-7d4d-4f1d-b16c-b6bf41e7ad53 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 02:59:19 functional-936345 crio[5228]: time="2025-12-19 02:59:19.708058396Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6d372d30-7d4d-4f1d-b16c-b6bf41e7ad53 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 02:59:19 functional-936345 crio[5228]: time="2025-12-19 02:59:19.709026183Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cdad69b8660e7421aa61d0ad9d0dd691f4761a7613f3c84f5b1eccca3afedafe,PodSandboxId:0db09e281a9673507063c8d4523ab6a2d2ba640a8ffeaa5ee8537081a621d4e6,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,State:CONTAINER_RUNNING,CreatedAt:1766113097827511145,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7e6b459d-dc6a-4646-a8d0-bca11659050f,},Annotations:map[string]string{io.kubernetes.container.hash: 8389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a42f49a7d3fd0d37ede877b7569ee465595b5c72c8ee7259ed6349c84d7d4d74,PodSandboxId:12d8e17e2885642262caf32253c2465fa887352df17f791e16b872c3fe3391b7,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1766112915176417353,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-7d7b65bc95-c6dqh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 367404f1-003b-464d-8fb8-d9c1dba5d64c,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"
TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91e758f1aacca078e5d5e4ade2daaf122c267a8da1977c176e140be8c0f6b5b8,PodSandboxId:e918007929d35b5cb8e3dc0752427803b8496d458837ae4795f9a999b104b7c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1766112538553240558,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-vbbbl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6293c621-6771-4f49-82da-b9855c06b18c,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container
.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f41e37e14f858d2395248ad7cc9a1110682e9292d64ed829cdfdb3235af49eb4,PodSandboxId:a92b49d24460aa320c25e155d6caf6b08b0235c1544469c7b8221a11f487c3b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINE
R_RUNNING,CreatedAt:1766112538232052310,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lfp8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5822977e-9a2f-4ca6-8cb6-26f161c23791,},Annotations:map[string]string{io.kubernetes.container.hash: d6e0e1a9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b9d4f0960c05f3f429e70c4f2fe9f0011729c50b33da6bd7b3b81b3de517ab3,PodSandboxId:70ef0261fbcedcbc5ae3230cf923a4dfffb20600a7e4151d688f1fb654ccf238,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766
112538192380480,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd92a8c6-e659-4184-bcbd-43da477075c7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a6831dc2bcef8d4b1a94250aed36dd9fb44a05ea80016e3768e48123a8e1f6c,PodSandboxId:35cb157095476b02b0abeaf08d488668f96c9e0d4728acd3994b964c3700b468,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_RUNNING,CreatedAt:1766112534452951956,L
abels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-936345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2b4be683f1b2146cda99455d27828d0,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f7a42f7becb9ee6e394e4711ce0d939c670b9b86111d9eacaf91742d4b61876,PodSandboxId:d02d27fb02ddf4040b24d90914f56b9a7e2f13861d573d1b8756bef024538549,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:58865405a1
3bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,State:CONTAINER_RUNNING,CreatedAt:1766112534422616995,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-936345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 711dafb450914341c18eb7924278c787,},Annotations:map[string]string{io.kubernetes.container.hash: 3a762ed7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4ae4a05d12afd9d8d285abf18d51893a511feb65943a48d1182033fbca6b790,PodSandboxId:e27f37b5433ef140fc0770f3aa8fad8c203db6e4c12219c4ecf0d9f7e70d6d3a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d
6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_RUNNING,CreatedAt:1766112534406923517,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-936345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c78855ae3727af10a88609c41e957243,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae201f753e2fd25d72bcc3e9cb3882717e0a242162a84fba4346388878190fb2,PodSandboxId:f9bc0a0f70474fa6cc1046f5edb3fb37d910306f5e376c522bec7f7178b89806,Metadata:&ContainerMetadata{Name:kube-controller-ma
nager,Attempt:2,},Image:&ImageSpec{Image:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_RUNNING,CreatedAt:1766112534388772204,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-936345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e24264402a1697b7f5edbdf8e719b1ae,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6d372d30-7d4d-4f1d-b16
c-b6bf41e7ad53 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 02:59:19 functional-936345 crio[5228]: time="2025-12-19 02:59:19.710265953Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d091dae1-fb58-48b2-abc6-010530cad771 name=/runtime.v1.RuntimeService/Version
	Dec 19 02:59:19 functional-936345 crio[5228]: time="2025-12-19 02:59:19.710471133Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d091dae1-fb58-48b2-abc6-010530cad771 name=/runtime.v1.RuntimeService/Version
	Dec 19 02:59:19 functional-936345 crio[5228]: time="2025-12-19 02:59:19.714035750Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8ee50176-e1a7-4ca9-b088-b1805360dbe4 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 02:59:19 functional-936345 crio[5228]: time="2025-12-19 02:59:19.714767400Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766113159714744779,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:242160,},InodesUsed:&UInt64Value{Value:113,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8ee50176-e1a7-4ca9-b088-b1805360dbe4 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 02:59:19 functional-936345 crio[5228]: time="2025-12-19 02:59:19.716038500Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=62e8333c-207c-42bc-9189-93ab1420a697 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 02:59:19 functional-936345 crio[5228]: time="2025-12-19 02:59:19.716129526Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=62e8333c-207c-42bc-9189-93ab1420a697 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 02:59:19 functional-936345 crio[5228]: time="2025-12-19 02:59:19.716466628Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cdad69b8660e7421aa61d0ad9d0dd691f4761a7613f3c84f5b1eccca3afedafe,PodSandboxId:0db09e281a9673507063c8d4523ab6a2d2ba640a8ffeaa5ee8537081a621d4e6,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,State:CONTAINER_RUNNING,CreatedAt:1766113097827511145,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7e6b459d-dc6a-4646-a8d0-bca11659050f,},Annotations:map[string]string{io.kubernetes.container.hash: 8389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a42f49a7d3fd0d37ede877b7569ee465595b5c72c8ee7259ed6349c84d7d4d74,PodSandboxId:12d8e17e2885642262caf32253c2465fa887352df17f791e16b872c3fe3391b7,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1766112915176417353,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-7d7b65bc95-c6dqh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 367404f1-003b-464d-8fb8-d9c1dba5d64c,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"
TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e1a5f3eb5f526298bd503d941552876969e9e0556abcf372f10b81ea35a6035,PodSandboxId:12bacd69cf5382b27b9a90d143232090d7833ad686639b002af88757f19da5f4,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1766112626299280408,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cbe8a0dd-5db0-4c8a-a076-c2b0c28c1f82,},Annotations:map[string]string{io.kubernetes.container.hash: dbb
284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91e758f1aacca078e5d5e4ade2daaf122c267a8da1977c176e140be8c0f6b5b8,PodSandboxId:e918007929d35b5cb8e3dc0752427803b8496d458837ae4795f9a999b104b7c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1766112538553240558,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-vbbbl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6293c621-6771-4f49-82da-b9855c06b18c,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.p
orts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f41e37e14f858d2395248ad7cc9a1110682e9292d64ed829cdfdb3235af49eb4,PodSandboxId:a92b49d24460aa320c25e155d6caf6b08b0235c1544469c7b8221a11f487c3b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_
RUNNING,CreatedAt:1766112538232052310,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lfp8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5822977e-9a2f-4ca6-8cb6-26f161c23791,},Annotations:map[string]string{io.kubernetes.container.hash: d6e0e1a9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b9d4f0960c05f3f429e70c4f2fe9f0011729c50b33da6bd7b3b81b3de517ab3,PodSandboxId:70ef0261fbcedcbc5ae3230cf923a4dfffb20600a7e4151d688f1fb654ccf238,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:176611
2538192380480,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd92a8c6-e659-4184-bcbd-43da477075c7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a6831dc2bcef8d4b1a94250aed36dd9fb44a05ea80016e3768e48123a8e1f6c,PodSandboxId:35cb157095476b02b0abeaf08d488668f96c9e0d4728acd3994b964c3700b468,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_RUNNING,CreatedAt:1766112534452951956,Lab
els:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-936345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2b4be683f1b2146cda99455d27828d0,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f7a42f7becb9ee6e394e4711ce0d939c670b9b86111d9eacaf91742d4b61876,PodSandboxId:d02d27fb02ddf4040b24d90914f56b9a7e2f13861d573d1b8756bef024538549,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:58865405a13b
ccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,State:CONTAINER_RUNNING,CreatedAt:1766112534422616995,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-936345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 711dafb450914341c18eb7924278c787,},Annotations:map[string]string{io.kubernetes.container.hash: 3a762ed7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4ae4a05d12afd9d8d285abf18d51893a511feb65943a48d1182033fbca6b790,PodSandboxId:e27f37b5433ef140fc0770f3aa8fad8c203db6e4c12219c4ecf0d9f7e70d6d3a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6f
f430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_RUNNING,CreatedAt:1766112534406923517,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-936345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c78855ae3727af10a88609c41e957243,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae201f753e2fd25d72bcc3e9cb3882717e0a242162a84fba4346388878190fb2,PodSandboxId:f9bc0a0f70474fa6cc1046f5edb3fb37d910306f5e376c522bec7f7178b89806,Metadata:&ContainerMetadata{Name:kube-controller-mana
ger,Attempt:2,},Image:&ImageSpec{Image:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_RUNNING,CreatedAt:1766112534388772204,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-936345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e24264402a1697b7f5edbdf8e719b1ae,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a7791700eee60709360756b6457c1f39022bd70f852043f8fc410912b7
eac51,PodSandboxId:96419d7b2a3c490726e8415d3406d3339ff543893b9fcf82f96df0c362123b09,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1766112496013321509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-vbbbl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6293c621-6771-4f49-82da-b9855c06b18c,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readine
ss-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf6d164603577eba196a03571377b050885615eb2c1f5bc8cd2b3648f7365509,PodSandboxId:9589988aeaee6b32374ef45e84566e546064d358ea05730fe94ccb2632816218,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_EXITED,CreatedAt:1766112495721931029,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lfp8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5822977e-9a2f-4ca6-8cb6-26f161c23791,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: d6e0e1a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55c562b08e9b8503c78d017881ca66b665a54239ea96beb94ed77195ee8e7043,PodSandboxId:ace9301814f0ffaaec2e3bacca3662ed1afc226f3f5ce4e68dcb977e524983ae,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766112495709625548,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd92a8c6-e659-4184-bcbd-43da477075c7,},Annotations:map[string]string{io.kubernetes.container.hash:
6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cbc3f93240053c62c6dd367d02faccdf3daa42629940378a9c34a13fcb58be8,PodSandboxId:c6fec3d2091daa0779891e3a982fd27bf9b549e96b7343ef851e6090bd001aaa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_EXITED,CreatedAt:1766112491916767855,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-936345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2b4be683f1b2146cda99455d27828d0,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.k
ubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c30141c2d08a9d0bc12975b94953d7e149695bd82c65e81992aea6fcea340cc,PodSandboxId:3032bc4eef53c29fcaa42cae1398d811952a3c3289a6e573a2d367833dae7cd2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_EXITED,CreatedAt:1766112491863558805,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-936345,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: e24264402a1697b7f5edbdf8e719b1ae,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baa578ff64f7266f8aec4a6a718785e4c57c078f37fb762460df66a5ae091dc9,PodSandboxId:d38d8b5ae41c4328ced9df76847a7d97a43a0851e2cc16e31fd4bc1c4e24c235,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_EXITED,CreatedAt:1766112491815245134,Labels:map[string]string{io.kubernetes.container.name: etcd,io.k
ubernetes.pod.name: etcd-functional-936345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c78855ae3727af10a88609c41e957243,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=62e8333c-207c-42bc-9189-93ab1420a697 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 02:59:19 functional-936345 crio[5228]: time="2025-12-19 02:59:19.755813930Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=243d4e9e-c018-4d9e-a9c4-eccc01cfea24 name=/runtime.v1.RuntimeService/Version
	Dec 19 02:59:19 functional-936345 crio[5228]: time="2025-12-19 02:59:19.756045414Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=243d4e9e-c018-4d9e-a9c4-eccc01cfea24 name=/runtime.v1.RuntimeService/Version
	Dec 19 02:59:19 functional-936345 crio[5228]: time="2025-12-19 02:59:19.757234524Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=80ccdbf2-3232-48a2-abb3-f53d692a9d93 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 02:59:19 functional-936345 crio[5228]: time="2025-12-19 02:59:19.758074648Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766113159758055292,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:242160,},InodesUsed:&UInt64Value{Value:113,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=80ccdbf2-3232-48a2-abb3-f53d692a9d93 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 02:59:19 functional-936345 crio[5228]: time="2025-12-19 02:59:19.759959819Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2ce16a0f-e01b-41eb-bd68-f1f96247e2a6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 02:59:19 functional-936345 crio[5228]: time="2025-12-19 02:59:19.760009375Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2ce16a0f-e01b-41eb-bd68-f1f96247e2a6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 02:59:19 functional-936345 crio[5228]: time="2025-12-19 02:59:19.760288309Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cdad69b8660e7421aa61d0ad9d0dd691f4761a7613f3c84f5b1eccca3afedafe,PodSandboxId:0db09e281a9673507063c8d4523ab6a2d2ba640a8ffeaa5ee8537081a621d4e6,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,State:CONTAINER_RUNNING,CreatedAt:1766113097827511145,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7e6b459d-dc6a-4646-a8d0-bca11659050f,},Annotations:map[string]string{io.kubernetes.container.hash: 8389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a42f49a7d3fd0d37ede877b7569ee465595b5c72c8ee7259ed6349c84d7d4d74,PodSandboxId:12d8e17e2885642262caf32253c2465fa887352df17f791e16b872c3fe3391b7,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1766112915176417353,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-7d7b65bc95-c6dqh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 367404f1-003b-464d-8fb8-d9c1dba5d64c,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"
TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e1a5f3eb5f526298bd503d941552876969e9e0556abcf372f10b81ea35a6035,PodSandboxId:12bacd69cf5382b27b9a90d143232090d7833ad686639b002af88757f19da5f4,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1766112626299280408,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cbe8a0dd-5db0-4c8a-a076-c2b0c28c1f82,},Annotations:map[string]string{io.kubernetes.container.hash: dbb
284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91e758f1aacca078e5d5e4ade2daaf122c267a8da1977c176e140be8c0f6b5b8,PodSandboxId:e918007929d35b5cb8e3dc0752427803b8496d458837ae4795f9a999b104b7c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1766112538553240558,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-vbbbl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6293c621-6771-4f49-82da-b9855c06b18c,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.p
orts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f41e37e14f858d2395248ad7cc9a1110682e9292d64ed829cdfdb3235af49eb4,PodSandboxId:a92b49d24460aa320c25e155d6caf6b08b0235c1544469c7b8221a11f487c3b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_
RUNNING,CreatedAt:1766112538232052310,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lfp8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5822977e-9a2f-4ca6-8cb6-26f161c23791,},Annotations:map[string]string{io.kubernetes.container.hash: d6e0e1a9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b9d4f0960c05f3f429e70c4f2fe9f0011729c50b33da6bd7b3b81b3de517ab3,PodSandboxId:70ef0261fbcedcbc5ae3230cf923a4dfffb20600a7e4151d688f1fb654ccf238,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:176611
2538192380480,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd92a8c6-e659-4184-bcbd-43da477075c7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a6831dc2bcef8d4b1a94250aed36dd9fb44a05ea80016e3768e48123a8e1f6c,PodSandboxId:35cb157095476b02b0abeaf08d488668f96c9e0d4728acd3994b964c3700b468,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_RUNNING,CreatedAt:1766112534452951956,Lab
els:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-936345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2b4be683f1b2146cda99455d27828d0,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f7a42f7becb9ee6e394e4711ce0d939c670b9b86111d9eacaf91742d4b61876,PodSandboxId:d02d27fb02ddf4040b24d90914f56b9a7e2f13861d573d1b8756bef024538549,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:58865405a13b
ccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,State:CONTAINER_RUNNING,CreatedAt:1766112534422616995,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-936345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 711dafb450914341c18eb7924278c787,},Annotations:map[string]string{io.kubernetes.container.hash: 3a762ed7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4ae4a05d12afd9d8d285abf18d51893a511feb65943a48d1182033fbca6b790,PodSandboxId:e27f37b5433ef140fc0770f3aa8fad8c203db6e4c12219c4ecf0d9f7e70d6d3a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6f
f430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_RUNNING,CreatedAt:1766112534406923517,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-936345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c78855ae3727af10a88609c41e957243,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae201f753e2fd25d72bcc3e9cb3882717e0a242162a84fba4346388878190fb2,PodSandboxId:f9bc0a0f70474fa6cc1046f5edb3fb37d910306f5e376c522bec7f7178b89806,Metadata:&ContainerMetadata{Name:kube-controller-mana
ger,Attempt:2,},Image:&ImageSpec{Image:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_RUNNING,CreatedAt:1766112534388772204,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-936345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e24264402a1697b7f5edbdf8e719b1ae,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a7791700eee60709360756b6457c1f39022bd70f852043f8fc410912b7
eac51,PodSandboxId:96419d7b2a3c490726e8415d3406d3339ff543893b9fcf82f96df0c362123b09,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1766112496013321509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-vbbbl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6293c621-6771-4f49-82da-b9855c06b18c,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readine
ss-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf6d164603577eba196a03571377b050885615eb2c1f5bc8cd2b3648f7365509,PodSandboxId:9589988aeaee6b32374ef45e84566e546064d358ea05730fe94ccb2632816218,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_EXITED,CreatedAt:1766112495721931029,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lfp8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5822977e-9a2f-4ca6-8cb6-26f161c23791,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: d6e0e1a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55c562b08e9b8503c78d017881ca66b665a54239ea96beb94ed77195ee8e7043,PodSandboxId:ace9301814f0ffaaec2e3bacca3662ed1afc226f3f5ce4e68dcb977e524983ae,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766112495709625548,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd92a8c6-e659-4184-bcbd-43da477075c7,},Annotations:map[string]string{io.kubernetes.container.hash:
6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cbc3f93240053c62c6dd367d02faccdf3daa42629940378a9c34a13fcb58be8,PodSandboxId:c6fec3d2091daa0779891e3a982fd27bf9b549e96b7343ef851e6090bd001aaa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_EXITED,CreatedAt:1766112491916767855,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-936345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2b4be683f1b2146cda99455d27828d0,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.k
ubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c30141c2d08a9d0bc12975b94953d7e149695bd82c65e81992aea6fcea340cc,PodSandboxId:3032bc4eef53c29fcaa42cae1398d811952a3c3289a6e573a2d367833dae7cd2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_EXITED,CreatedAt:1766112491863558805,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-936345,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: e24264402a1697b7f5edbdf8e719b1ae,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baa578ff64f7266f8aec4a6a718785e4c57c078f37fb762460df66a5ae091dc9,PodSandboxId:d38d8b5ae41c4328ced9df76847a7d97a43a0851e2cc16e31fd4bc1c4e24c235,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_EXITED,CreatedAt:1766112491815245134,Labels:map[string]string{io.kubernetes.container.name: etcd,io.k
ubernetes.pod.name: etcd-functional-936345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c78855ae3727af10a88609c41e957243,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2ce16a0f-e01b-41eb-bd68-f1f96247e2a6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 02:59:19 functional-936345 crio[5228]: time="2025-12-19 02:59:19.786754021Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a2d3320a-23c7-4508-993e-11afdbf233f5 name=/runtime.v1.RuntimeService/Version
	Dec 19 02:59:19 functional-936345 crio[5228]: time="2025-12-19 02:59:19.787019776Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a2d3320a-23c7-4508-993e-11afdbf233f5 name=/runtime.v1.RuntimeService/Version
	Dec 19 02:59:19 functional-936345 crio[5228]: time="2025-12-19 02:59:19.788550816Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ddf36712-3269-4cb9-b399-82c78999a41a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 02:59:19 functional-936345 crio[5228]: time="2025-12-19 02:59:19.789505076Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766113159789485643,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:242160,},InodesUsed:&UInt64Value{Value:113,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ddf36712-3269-4cb9-b399-82c78999a41a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 02:59:19 functional-936345 crio[5228]: time="2025-12-19 02:59:19.790580173Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=234d027f-b752-4e7d-a608-df7e7a02c8f3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 02:59:19 functional-936345 crio[5228]: time="2025-12-19 02:59:19.790905445Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=234d027f-b752-4e7d-a608-df7e7a02c8f3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 02:59:19 functional-936345 crio[5228]: time="2025-12-19 02:59:19.791228315Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cdad69b8660e7421aa61d0ad9d0dd691f4761a7613f3c84f5b1eccca3afedafe,PodSandboxId:0db09e281a9673507063c8d4523ab6a2d2ba640a8ffeaa5ee8537081a621d4e6,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,State:CONTAINER_RUNNING,CreatedAt:1766113097827511145,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7e6b459d-dc6a-4646-a8d0-bca11659050f,},Annotations:map[string]string{io.kubernetes.container.hash: 8389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a42f49a7d3fd0d37ede877b7569ee465595b5c72c8ee7259ed6349c84d7d4d74,PodSandboxId:12d8e17e2885642262caf32253c2465fa887352df17f791e16b872c3fe3391b7,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1766112915176417353,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-7d7b65bc95-c6dqh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 367404f1-003b-464d-8fb8-d9c1dba5d64c,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"
TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e1a5f3eb5f526298bd503d941552876969e9e0556abcf372f10b81ea35a6035,PodSandboxId:12bacd69cf5382b27b9a90d143232090d7833ad686639b002af88757f19da5f4,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1766112626299280408,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cbe8a0dd-5db0-4c8a-a076-c2b0c28c1f82,},Annotations:map[string]string{io.kubernetes.container.hash: dbb
284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91e758f1aacca078e5d5e4ade2daaf122c267a8da1977c176e140be8c0f6b5b8,PodSandboxId:e918007929d35b5cb8e3dc0752427803b8496d458837ae4795f9a999b104b7c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1766112538553240558,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-vbbbl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6293c621-6771-4f49-82da-b9855c06b18c,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.p
orts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f41e37e14f858d2395248ad7cc9a1110682e9292d64ed829cdfdb3235af49eb4,PodSandboxId:a92b49d24460aa320c25e155d6caf6b08b0235c1544469c7b8221a11f487c3b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_
RUNNING,CreatedAt:1766112538232052310,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lfp8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5822977e-9a2f-4ca6-8cb6-26f161c23791,},Annotations:map[string]string{io.kubernetes.container.hash: d6e0e1a9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b9d4f0960c05f3f429e70c4f2fe9f0011729c50b33da6bd7b3b81b3de517ab3,PodSandboxId:70ef0261fbcedcbc5ae3230cf923a4dfffb20600a7e4151d688f1fb654ccf238,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:176611
2538192380480,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd92a8c6-e659-4184-bcbd-43da477075c7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a6831dc2bcef8d4b1a94250aed36dd9fb44a05ea80016e3768e48123a8e1f6c,PodSandboxId:35cb157095476b02b0abeaf08d488668f96c9e0d4728acd3994b964c3700b468,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_RUNNING,CreatedAt:1766112534452951956,Lab
els:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-936345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2b4be683f1b2146cda99455d27828d0,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f7a42f7becb9ee6e394e4711ce0d939c670b9b86111d9eacaf91742d4b61876,PodSandboxId:d02d27fb02ddf4040b24d90914f56b9a7e2f13861d573d1b8756bef024538549,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:58865405a13b
ccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,State:CONTAINER_RUNNING,CreatedAt:1766112534422616995,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-936345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 711dafb450914341c18eb7924278c787,},Annotations:map[string]string{io.kubernetes.container.hash: 3a762ed7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4ae4a05d12afd9d8d285abf18d51893a511feb65943a48d1182033fbca6b790,PodSandboxId:e27f37b5433ef140fc0770f3aa8fad8c203db6e4c12219c4ecf0d9f7e70d6d3a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6f
f430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_RUNNING,CreatedAt:1766112534406923517,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-936345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c78855ae3727af10a88609c41e957243,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae201f753e2fd25d72bcc3e9cb3882717e0a242162a84fba4346388878190fb2,PodSandboxId:f9bc0a0f70474fa6cc1046f5edb3fb37d910306f5e376c522bec7f7178b89806,Metadata:&ContainerMetadata{Name:kube-controller-mana
ger,Attempt:2,},Image:&ImageSpec{Image:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_RUNNING,CreatedAt:1766112534388772204,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-936345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e24264402a1697b7f5edbdf8e719b1ae,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a7791700eee60709360756b6457c1f39022bd70f852043f8fc410912b7
eac51,PodSandboxId:96419d7b2a3c490726e8415d3406d3339ff543893b9fcf82f96df0c362123b09,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1766112496013321509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-vbbbl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6293c621-6771-4f49-82da-b9855c06b18c,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readine
ss-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf6d164603577eba196a03571377b050885615eb2c1f5bc8cd2b3648f7365509,PodSandboxId:9589988aeaee6b32374ef45e84566e546064d358ea05730fe94ccb2632816218,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_EXITED,CreatedAt:1766112495721931029,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lfp8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5822977e-9a2f-4ca6-8cb6-26f161c23791,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: d6e0e1a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55c562b08e9b8503c78d017881ca66b665a54239ea96beb94ed77195ee8e7043,PodSandboxId:ace9301814f0ffaaec2e3bacca3662ed1afc226f3f5ce4e68dcb977e524983ae,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766112495709625548,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd92a8c6-e659-4184-bcbd-43da477075c7,},Annotations:map[string]string{io.kubernetes.container.hash:
6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cbc3f93240053c62c6dd367d02faccdf3daa42629940378a9c34a13fcb58be8,PodSandboxId:c6fec3d2091daa0779891e3a982fd27bf9b549e96b7343ef851e6090bd001aaa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_EXITED,CreatedAt:1766112491916767855,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-936345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2b4be683f1b2146cda99455d27828d0,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.k
ubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c30141c2d08a9d0bc12975b94953d7e149695bd82c65e81992aea6fcea340cc,PodSandboxId:3032bc4eef53c29fcaa42cae1398d811952a3c3289a6e573a2d367833dae7cd2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_EXITED,CreatedAt:1766112491863558805,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-936345,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: e24264402a1697b7f5edbdf8e719b1ae,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baa578ff64f7266f8aec4a6a718785e4c57c078f37fb762460df66a5ae091dc9,PodSandboxId:d38d8b5ae41c4328ced9df76847a7d97a43a0851e2cc16e31fd4bc1c4e24c235,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_EXITED,CreatedAt:1766112491815245134,Labels:map[string]string{io.kubernetes.container.name: etcd,io.k
ubernetes.pod.name: etcd-functional-936345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c78855ae3727af10a88609c41e957243,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=234d027f-b752-4e7d-a608-df7e7a02c8f3 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                         CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	cdad69b8660e7       04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5                                              About a minute ago   Running             myfrontend                0                   0db09e281a967       sp-pod                                      default
	a42f49a7d3fd0       public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036   4 minutes ago        Running             mysql                     0                   12d8e17e28856       mysql-7d7b65bc95-c6dqh                      default
	6e1a5f3eb5f52       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e           8 minutes ago        Exited              mount-munger              0                   12bacd69cf538       busybox-mount                               default
	91e758f1aacca       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                              10 minutes ago       Running             coredns                   2                   e918007929d35       coredns-7d764666f9-vbbbl                    kube-system
	f41e37e14f858       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a                                              10 minutes ago       Running             kube-proxy                2                   a92b49d24460a       kube-proxy-lfp8r                            kube-system
	9b9d4f0960c05       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                              10 minutes ago       Running             storage-provisioner       2                   70ef0261fbced       storage-provisioner                         kube-system
	1a6831dc2bcef       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc                                              10 minutes ago       Running             kube-scheduler            2                   35cb157095476       kube-scheduler-functional-936345            kube-system
	9f7a42f7becb9       58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce                                              10 minutes ago       Running             kube-apiserver            0                   d02d27fb02ddf       kube-apiserver-functional-936345            kube-system
	d4ae4a05d12af       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                              10 minutes ago       Running             etcd                      2                   e27f37b5433ef       etcd-functional-936345                      kube-system
	ae201f753e2fd       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614                                              10 minutes ago       Running             kube-controller-manager   2                   f9bc0a0f70474       kube-controller-manager-functional-936345   kube-system
	3a7791700eee6       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                              11 minutes ago       Exited              coredns                   1                   96419d7b2a3c4       coredns-7d764666f9-vbbbl                    kube-system
	cf6d164603577       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a                                              11 minutes ago       Exited              kube-proxy                1                   9589988aeaee6       kube-proxy-lfp8r                            kube-system
	55c562b08e9b8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                              11 minutes ago       Exited              storage-provisioner       1                   ace9301814f0f       storage-provisioner                         kube-system
	5cbc3f9324005       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc                                              11 minutes ago       Exited              kube-scheduler            1                   c6fec3d2091da       kube-scheduler-functional-936345            kube-system
	0c30141c2d08a       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614                                              11 minutes ago       Exited              kube-controller-manager   1                   3032bc4eef53c       kube-controller-manager-functional-936345   kube-system
	baa578ff64f72       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                              11 minutes ago       Exited              etcd                      1                   d38d8b5ae41c4       etcd-functional-936345                      kube-system
	
	
	==> coredns [3a7791700eee60709360756b6457c1f39022bd70f852043f8fc410912b7eac51] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:45323 - 19793 "HINFO IN 4612485286517945921.2260184995115770894. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.030459974s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [91e758f1aacca078e5d5e4ade2daaf122c267a8da1977c176e140be8c0f6b5b8] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:41838 - 52153 "HINFO IN 4280416885053973904.1603704149913353446. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.024671331s
	
	
	==> describe nodes <==
	Name:               functional-936345
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-936345
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=functional-936345
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T02_47_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 02:47:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-936345
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 02:59:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 02:58:29 +0000   Fri, 19 Dec 2025 02:47:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 02:58:29 +0000   Fri, 19 Dec 2025 02:47:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 02:58:29 +0000   Fri, 19 Dec 2025 02:47:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 02:58:29 +0000   Fri, 19 Dec 2025 02:47:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.80
	  Hostname:    functional-936345
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 a8b3017e61d64166941fb065e0138897
	  System UUID:                a8b3017e-61d6-4166-941f-b065e0138897
	  Boot ID:                    abf60ed3-98d5-481b-b193-2b3182ae8fc7
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (16 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-5758569b79-sxs2v                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-9f67c86d4-wg5l4                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-7d7b65bc95-c6dqh                                   600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    8m41s
	  default                     sp-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 coredns-7d764666f9-vbbbl                                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-functional-936345                                   100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-functional-936345                         250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-936345                200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-lfp8r                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-936345                         100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        kubernetes-dashboard-api-6f6bc9c789-nhrc9                100m (5%)     250m (12%)  200Mi (5%)       400Mi (10%)    9m56s
	  kubernetes-dashboard        kubernetes-dashboard-auth-7d44b44fcf-wk8mz               100m (5%)     250m (12%)  200Mi (5%)       400Mi (10%)    9m56s
	  kubernetes-dashboard        kubernetes-dashboard-kong-78b7499b45-tkd7b               0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m56s
	  kubernetes-dashboard        kubernetes-dashboard-metrics-scraper-594bbfb84b-sb6xj    100m (5%)     250m (12%)  200Mi (5%)       400Mi (10%)    9m56s
	  kubernetes-dashboard        kubernetes-dashboard-web-7f7574785f-gldgt                100m (5%)     250m (12%)  200Mi (5%)       400Mi (10%)    9m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests      Limits
	  --------           --------      ------
	  cpu                1750m (87%)   1700m (85%)
	  memory             1482Mi (37%)  2470Mi (63%)
	  ephemeral-storage  0 (0%)        0 (0%)
	  hugepages-2Mi      0 (0%)        0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  12m   node-controller  Node functional-936345 event: Registered Node functional-936345 in Controller
	  Normal  RegisteredNode  11m   node-controller  Node functional-936345 event: Registered Node functional-936345 in Controller
	  Normal  RegisteredNode  10m   node-controller  Node functional-936345 event: Registered Node functional-936345 in Controller
	
	
	==> dmesg <==
	[  +0.002152] (rpcbind)[120]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.177929] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.089908] kauditd_printk_skb: 1 callbacks suppressed
	[Dec19 02:47] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.120827] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.207935] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.030152] kauditd_printk_skb: 236 callbacks suppressed
	[ +35.965741] kauditd_printk_skb: 45 callbacks suppressed
	[Dec19 02:48] kauditd_printk_skb: 242 callbacks suppressed
	[  +8.572675] kauditd_printk_skb: 131 callbacks suppressed
	[  +0.105882] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.902513] kauditd_printk_skb: 78 callbacks suppressed
	[  +4.545680] kauditd_printk_skb: 170 callbacks suppressed
	[Dec19 02:49] kauditd_printk_skb: 131 callbacks suppressed
	[  +0.662586] kauditd_printk_skb: 169 callbacks suppressed
	[  +3.856516] kauditd_printk_skb: 32 callbacks suppressed
	[ +26.918937] kauditd_printk_skb: 182 callbacks suppressed
	[Dec19 02:50] kauditd_printk_skb: 29 callbacks suppressed
	[Dec19 02:51] kauditd_printk_skb: 38 callbacks suppressed
	[Dec19 02:55] kauditd_printk_skb: 26 callbacks suppressed
	[ +15.436510] crun[10057]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +0.004393] kauditd_printk_skb: 11 callbacks suppressed
	[Dec19 02:58] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [baa578ff64f7266f8aec4a6a718785e4c57c078f37fb762460df66a5ae091dc9] <==
	{"level":"info","ts":"2025-12-19T02:48:13.154917Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-19T02:48:13.155032Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-19T02:48:13.155088Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-19T02:48:13.156237Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-19T02:48:13.157430Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-19T02:48:13.158447Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-19T02:48:13.158630Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.80:2379"}
	{"level":"info","ts":"2025-12-19T02:48:40.232114Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-19T02:48:40.232178Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-936345","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.80:2380"],"advertise-client-urls":["https://192.168.39.80:2379"]}
	{"level":"error","ts":"2025-12-19T02:48:40.232249Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-19T02:48:40.319148Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-19T02:48:40.319221Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-19T02:48:40.319251Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"d33e7f1dba1e46ae","current-leader-member-id":"d33e7f1dba1e46ae"}
	{"level":"info","ts":"2025-12-19T02:48:40.319322Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-19T02:48:40.319353Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-19T02:48:40.319346Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-19T02:48:40.319584Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-19T02:48:40.319613Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-19T02:48:40.319428Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.80:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-19T02:48:40.319625Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.80:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-19T02:48:40.319631Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.80:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-19T02:48:40.322729Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.80:2380"}
	{"level":"error","ts":"2025-12-19T02:48:40.322871Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.80:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-19T02:48:40.322895Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.80:2380"}
	{"level":"info","ts":"2025-12-19T02:48:40.322995Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-936345","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.80:2380"],"advertise-client-urls":["https://192.168.39.80:2379"]}
	
	
	==> etcd [d4ae4a05d12afd9d8d285abf18d51893a511feb65943a48d1182033fbca6b790] <==
	{"level":"warn","ts":"2025-12-19T02:55:12.707014Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"201.529488ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T02:55:12.707026Z","caller":"traceutil/trace.go:172","msg":"trace[855489840] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1265; }","duration":"201.54195ms","start":"2025-12-19T02:55:12.505480Z","end":"2025-12-19T02:55:12.707022Z","steps":["trace[855489840] 'agreement among raft nodes before linearized reading'  (duration: 201.520551ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T02:55:12.707203Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"280.731825ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices/\" range_end:\"/registry/apiregistration.k8s.io/apiservices0\" limit:10000 revision:1261 ","response":"range_response_count:24 size:23124"}
	{"level":"info","ts":"2025-12-19T02:55:12.707218Z","caller":"traceutil/trace.go:172","msg":"trace[125474017] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/; range_end:/registry/apiregistration.k8s.io/apiservices0; response_count:24; response_revision:1265; }","duration":"280.74875ms","start":"2025-12-19T02:55:12.426464Z","end":"2025-12-19T02:55:12.707213Z","steps":["trace[125474017] 'agreement among raft nodes before linearized reading'  (duration: 280.636815ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T02:55:12.703736Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"756.785302ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/mutatingwebhookconfigurations\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T02:55:12.707634Z","caller":"traceutil/trace.go:172","msg":"trace[1182295487] range","detail":"{range_begin:/registry/mutatingwebhookconfigurations; range_end:; response_count:0; response_revision:1265; }","duration":"760.724412ms","start":"2025-12-19T02:55:11.946893Z","end":"2025-12-19T02:55:12.707617Z","steps":["trace[1182295487] 'agreement among raft nodes before linearized reading'  (duration: 756.757148ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T02:55:12.711824Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-19T02:55:11.946883Z","time spent":"764.928727ms","remote":"127.0.0.1:55912","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":0,"response size":29,"request content":"key:\"/registry/mutatingwebhookconfigurations\" limit:1 "}
	{"level":"info","ts":"2025-12-19T02:55:14.390910Z","caller":"traceutil/trace.go:172","msg":"trace[3041062] transaction","detail":"{read_only:false; response_revision:1266; number_of_response:1; }","duration":"246.07515ms","start":"2025-12-19T02:55:14.144821Z","end":"2025-12-19T02:55:14.390896Z","steps":["trace[3041062] 'process raft request'  (duration: 245.950109ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T02:55:14.609653Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"105.324869ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T02:55:14.610274Z","caller":"traceutil/trace.go:172","msg":"trace[1768900517] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1266; }","duration":"105.950409ms","start":"2025-12-19T02:55:14.504310Z","end":"2025-12-19T02:55:14.610261Z","steps":["trace[1768900517] 'range keys from in-memory index tree'  (duration: 105.240993ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T02:55:16.450256Z","caller":"traceutil/trace.go:172","msg":"trace[966446445] transaction","detail":"{read_only:false; response_revision:1275; number_of_response:1; }","duration":"181.345423ms","start":"2025-12-19T02:55:16.268898Z","end":"2025-12-19T02:55:16.450244Z","steps":["trace[966446445] 'process raft request'  (duration: 181.268156ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T02:55:21.193679Z","caller":"traceutil/trace.go:172","msg":"trace[1311821342] linearizableReadLoop","detail":"{readStateIndex:1441; appliedIndex:1441; }","duration":"369.452768ms","start":"2025-12-19T02:55:20.824211Z","end":"2025-12-19T02:55:21.193664Z","steps":["trace[1311821342] 'read index received'  (duration: 369.446563ms)","trace[1311821342] 'applied index is now lower than readState.Index'  (duration: 5.439µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-19T02:55:21.195924Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"371.183497ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T02:55:21.195982Z","caller":"traceutil/trace.go:172","msg":"trace[1789980356] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1282; }","duration":"371.765532ms","start":"2025-12-19T02:55:20.824207Z","end":"2025-12-19T02:55:21.195973Z","steps":["trace[1789980356] 'agreement among raft nodes before linearized reading'  (duration: 369.55683ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T02:55:21.196011Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-19T02:55:20.824189Z","time spent":"371.814768ms","remote":"127.0.0.1:54954","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2025-12-19T02:55:21.196303Z","caller":"traceutil/trace.go:172","msg":"trace[1231806994] transaction","detail":"{read_only:false; response_revision:1283; number_of_response:1; }","duration":"425.247821ms","start":"2025-12-19T02:55:20.771043Z","end":"2025-12-19T02:55:21.196291Z","steps":["trace[1231806994] 'process raft request'  (duration: 423.225299ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T02:55:21.196404Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-19T02:55:20.771030Z","time spent":"425.33305ms","remote":"127.0.0.1:55264","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1282 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-12-19T02:55:21.196997Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"184.357483ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/\" range_end:\"/registry/leases0\" limit:10000 revision:1271 ","response":"range_response_count:2 size:1261"}
	{"level":"info","ts":"2025-12-19T02:55:21.197097Z","caller":"traceutil/trace.go:172","msg":"trace[2049198063] range","detail":"{range_begin:/registry/leases/; range_end:/registry/leases0; response_count:2; response_revision:1283; }","duration":"184.458302ms","start":"2025-12-19T02:55:21.012629Z","end":"2025-12-19T02:55:21.197088Z","steps":["trace[2049198063] 'agreement among raft nodes before linearized reading'  (duration: 184.227284ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T02:55:21.198001Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"270.415857ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T02:55:21.198487Z","caller":"traceutil/trace.go:172","msg":"trace[228398858] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1283; }","duration":"273.252053ms","start":"2025-12-19T02:55:20.925216Z","end":"2025-12-19T02:55:21.198468Z","steps":["trace[228398858] 'agreement among raft nodes before linearized reading'  (duration: 269.735424ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T02:58:30.215475Z","caller":"traceutil/trace.go:172","msg":"trace[387749820] transaction","detail":"{read_only:false; response_revision:1492; number_of_response:1; }","duration":"100.226402ms","start":"2025-12-19T02:58:30.115225Z","end":"2025-12-19T02:58:30.215451Z","steps":["trace[387749820] 'process raft request'  (duration: 100.132366ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T02:58:55.721418Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1182}
	{"level":"info","ts":"2025-12-19T02:58:55.745906Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1182,"took":"24.158928ms","hash":773607096,"current-db-size-bytes":4542464,"current-db-size":"4.5 MB","current-db-size-in-use-bytes":2179072,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2025-12-19T02:58:55.745941Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":773607096,"revision":1182,"compact-revision":-1}
	
	
	==> kernel <==
	 02:59:20 up 12 min,  0 users,  load average: 0.37, 0.23, 0.16
	Linux functional-936345 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Dec 17 12:49:57 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [9f7a42f7becb9ee6e394e4711ce0d939c670b9b86111d9eacaf91742d4b61876] <==
	I1219 02:49:24.874735       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-kong-proxy" clusterIPs={"IPv4":"10.99.194.156"}
	I1219 02:49:24.883995       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-api" clusterIPs={"IPv4":"10.105.202.148"}
	I1219 02:49:24.894415       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.171.21"}
	I1219 02:49:24.905243       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-auth" clusterIPs={"IPv4":"10.97.127.37"}
	I1219 02:49:24.915585       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-web" clusterIPs={"IPv4":"10.106.84.86"}
	W1219 02:49:30.118472       1 logging.go:55] [core] [Channel #262 SubChannel #263]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:49:30.137352       1 logging.go:55] [core] [Channel #266 SubChannel #267]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:49:30.148888       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1219 02:49:30.164574       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:49:30.186134       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:49:30.222331       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:49:30.249253       1 logging.go:55] [core] [Channel #286 SubChannel #287]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:49:30.265975       1 logging.go:55] [core] [Channel #290 SubChannel #291]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:49:30.275854       1 logging.go:55] [core] [Channel #294 SubChannel #295]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:49:30.285561       1 logging.go:55] [core] [Channel #298 SubChannel #299]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1219 02:49:30.297257       1 logging.go:55] [core] [Channel #302 SubChannel #303]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:49:30.312093       1 logging.go:55] [core] [Channel #306 SubChannel #307]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1219 02:50:39.580243       1 alloc.go:329] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.107.214.188"}
	E1219 02:55:21.817314       1 conn.go:339] Error on socket receive: read tcp 192.168.39.80:8441->192.168.39.1:37272: use of closed network connection
	E1219 02:55:23.177540       1 conn.go:339] Error on socket receive: read tcp 192.168.39.80:8441->192.168.39.1:37286: use of closed network connection
	E1219 02:55:24.319551       1 conn.go:339] Error on socket receive: read tcp 192.168.39.80:8441->192.168.39.1:37310: use of closed network connection
	E1219 02:55:27.275536       1 conn.go:339] Error on socket receive: read tcp 192.168.39.80:8441->192.168.39.1:37336: use of closed network connection
	E1219 02:58:16.044926       1 conn.go:339] Error on socket receive: read tcp 192.168.39.80:8441->192.168.39.1:52204: use of closed network connection
	E1219 02:58:23.411174       1 conn.go:339] Error on socket receive: read tcp 192.168.39.80:8441->192.168.39.1:38106: use of closed network connection
	I1219 02:58:56.916478       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [0c30141c2d08a9d0bc12975b94953d7e149695bd82c65e81992aea6fcea340cc] <==
	I1219 02:48:17.540973       1 shared_informer.go:377] "Caches are synced"
	I1219 02:48:17.541104       1 shared_informer.go:377] "Caches are synced"
	I1219 02:48:17.541169       1 shared_informer.go:377] "Caches are synced"
	I1219 02:48:17.541134       1 shared_informer.go:377] "Caches are synced"
	I1219 02:48:17.541644       1 shared_informer.go:377] "Caches are synced"
	I1219 02:48:17.541838       1 shared_informer.go:377] "Caches are synced"
	I1219 02:48:17.541906       1 shared_informer.go:377] "Caches are synced"
	I1219 02:48:17.542026       1 shared_informer.go:377] "Caches are synced"
	I1219 02:48:17.542446       1 shared_informer.go:377] "Caches are synced"
	I1219 02:48:17.542474       1 shared_informer.go:377] "Caches are synced"
	I1219 02:48:17.542991       1 shared_informer.go:377] "Caches are synced"
	I1219 02:48:17.543035       1 shared_informer.go:377] "Caches are synced"
	I1219 02:48:17.543522       1 shared_informer.go:377] "Caches are synced"
	I1219 02:48:17.543645       1 shared_informer.go:377] "Caches are synced"
	I1219 02:48:17.544248       1 shared_informer.go:377] "Caches are synced"
	I1219 02:48:17.545422       1 shared_informer.go:377] "Caches are synced"
	I1219 02:48:17.550778       1 shared_informer.go:377] "Caches are synced"
	I1219 02:48:17.550872       1 shared_informer.go:377] "Caches are synced"
	I1219 02:48:17.550905       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1219 02:48:17.550910       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1219 02:48:17.550815       1 shared_informer.go:377] "Caches are synced"
	I1219 02:48:17.550838       1 shared_informer.go:377] "Caches are synced"
	I1219 02:48:17.550844       1 shared_informer.go:377] "Caches are synced"
	I1219 02:48:17.566419       1 shared_informer.go:377] "Caches are synced"
	I1219 02:48:17.620951       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-controller-manager [ae201f753e2fd25d72bcc3e9cb3882717e0a242162a84fba4346388878190fb2] <==
	I1219 02:49:00.081729       1 shared_informer.go:377] "Caches are synced"
	I1219 02:49:00.082216       1 shared_informer.go:377] "Caches are synced"
	I1219 02:49:00.083156       1 shared_informer.go:377] "Caches are synced"
	I1219 02:49:00.083357       1 shared_informer.go:377] "Caches are synced"
	I1219 02:49:00.078579       1 shared_informer.go:377] "Caches are synced"
	I1219 02:49:00.090659       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 02:49:00.107490       1 shared_informer.go:377] "Caches are synced"
	I1219 02:49:00.178934       1 shared_informer.go:377] "Caches are synced"
	I1219 02:49:00.178949       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1219 02:49:00.178953       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1219 02:49:00.191557       1 shared_informer.go:377] "Caches are synced"
	I1219 02:49:00.534378       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	I1219 02:49:30.113401       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" resource="kongcustomentities.configuration.konghq.com"
	I1219 02:49:30.113472       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" resource="kongconsumers.configuration.konghq.com"
	I1219 02:49:30.113504       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" resource="kongingresses.configuration.konghq.com"
	I1219 02:49:30.113529       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" resource="kongupstreampolicies.configuration.konghq.com"
	I1219 02:49:30.113560       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" resource="ingressclassparameterses.configuration.konghq.com"
	I1219 02:49:30.113588       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" resource="kongplugins.configuration.konghq.com"
	I1219 02:49:30.113619       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" resource="tcpingresses.configuration.konghq.com"
	I1219 02:49:30.113641       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" resource="kongconsumergroups.configuration.konghq.com"
	I1219 02:49:30.113678       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" resource="udpingresses.configuration.konghq.com"
	I1219 02:49:30.113842       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 02:49:30.209012       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 02:49:31.314654       1 shared_informer.go:377] "Caches are synced"
	I1219 02:49:31.412458       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [cf6d164603577eba196a03571377b050885615eb2c1f5bc8cd2b3648f7365509] <==
	I1219 02:48:16.089375       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 02:48:16.190391       1 shared_informer.go:377] "Caches are synced"
	I1219 02:48:16.190434       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.80"]
	E1219 02:48:16.190534       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 02:48:16.235930       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1219 02:48:16.235982       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1219 02:48:16.236003       1 server_linux.go:136] "Using iptables Proxier"
	I1219 02:48:16.249667       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 02:48:16.250956       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1219 02:48:16.250969       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 02:48:16.256445       1 config.go:200] "Starting service config controller"
	I1219 02:48:16.256456       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 02:48:16.256471       1 config.go:106] "Starting endpoint slice config controller"
	I1219 02:48:16.256475       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 02:48:16.256485       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 02:48:16.256488       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 02:48:16.259898       1 config.go:309] "Starting node config controller"
	I1219 02:48:16.259923       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 02:48:16.356895       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1219 02:48:16.356971       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1219 02:48:16.357171       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 02:48:16.360722       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-proxy [f41e37e14f858d2395248ad7cc9a1110682e9292d64ed829cdfdb3235af49eb4] <==
	I1219 02:48:58.624323       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 02:48:58.724726       1 shared_informer.go:377] "Caches are synced"
	I1219 02:48:58.724765       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.80"]
	E1219 02:48:58.724874       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 02:48:58.780747       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1219 02:48:58.780861       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1219 02:48:58.780882       1 server_linux.go:136] "Using iptables Proxier"
	I1219 02:48:58.789273       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 02:48:58.790060       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1219 02:48:58.790086       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 02:48:58.792528       1 config.go:200] "Starting service config controller"
	I1219 02:48:58.797870       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 02:48:58.798070       1 config.go:106] "Starting endpoint slice config controller"
	I1219 02:48:58.798097       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 02:48:58.798122       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 02:48:58.798141       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 02:48:58.800639       1 config.go:309] "Starting node config controller"
	I1219 02:48:58.800680       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 02:48:58.800697       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 02:48:58.904091       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 02:48:58.909263       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1219 02:48:58.913162       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1a6831dc2bcef8d4b1a94250aed36dd9fb44a05ea80016e3768e48123a8e1f6c] <==
	I1219 02:48:54.971108       1 serving.go:386] Generated self-signed cert in-memory
	W1219 02:48:56.866286       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1219 02:48:56.866475       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1219 02:48:56.866502       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1219 02:48:56.866579       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1219 02:48:56.920033       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-rc.1"
	I1219 02:48:56.920064       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 02:48:56.932869       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 02:48:56.932929       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1219 02:48:56.933040       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1219 02:48:56.932961       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 02:48:57.033570       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-scheduler [5cbc3f93240053c62c6dd367d02faccdf3daa42629940378a9c34a13fcb58be8] <==
	W1219 02:48:14.280113       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1219 02:48:14.280123       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1219 02:48:14.280129       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1219 02:48:14.333945       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-rc.1"
	I1219 02:48:14.334014       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 02:48:14.342056       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1219 02:48:14.345098       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 02:48:14.345131       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 02:48:14.345152       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1219 02:48:14.426333       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1219 02:48:14.427239       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1219 02:48:14.427692       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1219 02:48:14.427758       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1219 02:48:14.427827       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1219 02:48:14.428293       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1219 02:48:14.428340       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1219 02:48:14.428441       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1219 02:48:14.437630       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	I1219 02:48:14.446950       1 shared_informer.go:377] "Caches are synced"
	I1219 02:48:40.240013       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1219 02:48:40.240095       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1219 02:48:40.240115       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1219 02:48:40.240278       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 02:48:40.240641       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1219 02:48:40.240657       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 19 02:58:36 functional-936345 kubelet[5589]: E1219 02:58:36.703678    5589 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-vbbbl" containerName="coredns"
	Dec 19 02:58:44 functional-936345 kubelet[5589]: E1219 02:58:44.006493    5589 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766113124005274994  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:242160}  inodes_used:{value:113}}"
	Dec 19 02:58:44 functional-936345 kubelet[5589]: E1219 02:58:44.006728    5589 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766113124005274994  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:242160}  inodes_used:{value:113}}"
	Dec 19 02:58:44 functional-936345 kubelet[5589]: E1219 02:58:44.704107    5589 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-936345" containerName="kube-controller-manager"
	Dec 19 02:58:53 functional-936345 kubelet[5589]: E1219 02:58:53.799119    5589 manager.go:1119] Failed to create existing container: /kubepods/burstable/podc78855ae3727af10a88609c41e957243/crio-d38d8b5ae41c4328ced9df76847a7d97a43a0851e2cc16e31fd4bc1c4e24c235: Error finding container d38d8b5ae41c4328ced9df76847a7d97a43a0851e2cc16e31fd4bc1c4e24c235: Status 404 returned error can't find the container with id d38d8b5ae41c4328ced9df76847a7d97a43a0851e2cc16e31fd4bc1c4e24c235
	Dec 19 02:58:53 functional-936345 kubelet[5589]: E1219 02:58:53.799400    5589 manager.go:1119] Failed to create existing container: /kubepods/burstable/podf2b4be683f1b2146cda99455d27828d0/crio-c6fec3d2091daa0779891e3a982fd27bf9b549e96b7343ef851e6090bd001aaa: Error finding container c6fec3d2091daa0779891e3a982fd27bf9b549e96b7343ef851e6090bd001aaa: Status 404 returned error can't find the container with id c6fec3d2091daa0779891e3a982fd27bf9b549e96b7343ef851e6090bd001aaa
	Dec 19 02:58:53 functional-936345 kubelet[5589]: E1219 02:58:53.799695    5589 manager.go:1119] Failed to create existing container: /kubepods/burstable/pod6293c621-6771-4f49-82da-b9855c06b18c/crio-96419d7b2a3c490726e8415d3406d3339ff543893b9fcf82f96df0c362123b09: Error finding container 96419d7b2a3c490726e8415d3406d3339ff543893b9fcf82f96df0c362123b09: Status 404 returned error can't find the container with id 96419d7b2a3c490726e8415d3406d3339ff543893b9fcf82f96df0c362123b09
	Dec 19 02:58:53 functional-936345 kubelet[5589]: E1219 02:58:53.800101    5589 manager.go:1119] Failed to create existing container: /kubepods/besteffort/podcd92a8c6-e659-4184-bcbd-43da477075c7/crio-ace9301814f0ffaaec2e3bacca3662ed1afc226f3f5ce4e68dcb977e524983ae: Error finding container ace9301814f0ffaaec2e3bacca3662ed1afc226f3f5ce4e68dcb977e524983ae: Status 404 returned error can't find the container with id ace9301814f0ffaaec2e3bacca3662ed1afc226f3f5ce4e68dcb977e524983ae
	Dec 19 02:58:53 functional-936345 kubelet[5589]: E1219 02:58:53.800435    5589 manager.go:1119] Failed to create existing container: /kubepods/burstable/pode24264402a1697b7f5edbdf8e719b1ae/crio-3032bc4eef53c29fcaa42cae1398d811952a3c3289a6e573a2d367833dae7cd2: Error finding container 3032bc4eef53c29fcaa42cae1398d811952a3c3289a6e573a2d367833dae7cd2: Status 404 returned error can't find the container with id 3032bc4eef53c29fcaa42cae1398d811952a3c3289a6e573a2d367833dae7cd2
	Dec 19 02:58:53 functional-936345 kubelet[5589]: E1219 02:58:53.800650    5589 manager.go:1119] Failed to create existing container: /kubepods/besteffort/pod5822977e-9a2f-4ca6-8cb6-26f161c23791/crio-9589988aeaee6b32374ef45e84566e546064d358ea05730fe94ccb2632816218: Error finding container 9589988aeaee6b32374ef45e84566e546064d358ea05730fe94ccb2632816218: Status 404 returned error can't find the container with id 9589988aeaee6b32374ef45e84566e546064d358ea05730fe94ccb2632816218
	Dec 19 02:58:54 functional-936345 kubelet[5589]: E1219 02:58:54.008649    5589 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766113134008329829  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:242160}  inodes_used:{value:113}}"
	Dec 19 02:58:54 functional-936345 kubelet[5589]: E1219 02:58:54.008667    5589 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766113134008329829  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:242160}  inodes_used:{value:113}}"
	Dec 19 02:58:56 functional-936345 kubelet[5589]: E1219 02:58:56.043200    5589 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Dec 19 02:58:56 functional-936345 kubelet[5589]: E1219 02:58:56.043266    5589 kuberuntime_image.go:43] "Failed to pull image" err="fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Dec 19 02:58:56 functional-936345 kubelet[5589]: E1219 02:58:56.043474    5589 kuberuntime_manager.go:1664] "Unhandled Error" err="container echo-server start failed in pod hello-node-connect-9f67c86d4-wg5l4_default(6ffd5b41-1163-470d-8eca-617ef27bb37b): ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 19 02:58:56 functional-936345 kubelet[5589]: E1219 02:58:56.043510    5589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-9f67c86d4-wg5l4" podUID="6ffd5b41-1163-470d-8eca-617ef27bb37b"
	Dec 19 02:58:56 functional-936345 kubelet[5589]: E1219 02:58:56.703976    5589 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-936345" containerName="kube-apiserver"
	Dec 19 02:59:00 functional-936345 kubelet[5589]: E1219 02:59:00.703656    5589 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-936345" containerName="kube-scheduler"
	Dec 19 02:59:04 functional-936345 kubelet[5589]: E1219 02:59:04.010147    5589 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766113144009751244  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:242160}  inodes_used:{value:113}}"
	Dec 19 02:59:04 functional-936345 kubelet[5589]: E1219 02:59:04.010167    5589 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766113144009751244  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:242160}  inodes_used:{value:113}}"
	Dec 19 02:59:07 functional-936345 kubelet[5589]: E1219 02:59:07.704619    5589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-9f67c86d4-wg5l4" podUID="6ffd5b41-1163-470d-8eca-617ef27bb37b"
	Dec 19 02:59:10 functional-936345 kubelet[5589]: E1219 02:59:10.703343    5589 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-936345" containerName="etcd"
	Dec 19 02:59:14 functional-936345 kubelet[5589]: E1219 02:59:14.012193    5589 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766113154011840768  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:242160}  inodes_used:{value:113}}"
	Dec 19 02:59:14 functional-936345 kubelet[5589]: E1219 02:59:14.012310    5589 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766113154011840768  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:242160}  inodes_used:{value:113}}"
	Dec 19 02:59:18 functional-936345 kubelet[5589]: E1219 02:59:18.704301    5589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-9f67c86d4-wg5l4" podUID="6ffd5b41-1163-470d-8eca-617ef27bb37b"
	
	
	==> storage-provisioner [55c562b08e9b8503c78d017881ca66b665a54239ea96beb94ed77195ee8e7043] <==
	I1219 02:48:15.858877       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1219 02:48:15.888980       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1219 02:48:15.889024       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1219 02:48:15.896113       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:48:19.353731       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:48:23.614534       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:48:27.212955       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:48:30.266366       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:48:33.288567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:48:33.299877       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1219 02:48:33.300148       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1219 02:48:33.300417       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-936345_fc50c04d-f684-4500-8968-a4dea833c4c5!
	I1219 02:48:33.300674       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"85c46f16-8e78-4eed-b914-486f44a7c906", APIVersion:"v1", ResourceVersion:"544", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-936345_fc50c04d-f684-4500-8968-a4dea833c4c5 became leader
	W1219 02:48:33.306363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:48:33.316199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1219 02:48:33.402132       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-936345_fc50c04d-f684-4500-8968-a4dea833c4c5!
	W1219 02:48:35.319115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:48:35.333563       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:48:37.337913       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:48:37.343915       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:48:39.347257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:48:39.354910       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [9b9d4f0960c05f3f429e70c4f2fe9f0011729c50b33da6bd7b3b81b3de517ab3] <==
	W1219 02:58:56.354750       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:58:58.357322       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:58:58.364827       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:59:00.368551       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:59:00.376585       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:59:02.379731       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:59:02.386463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:59:04.391284       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:59:04.396307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:59:06.399830       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:59:06.408123       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:59:08.411382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:59:08.416081       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:59:10.418891       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:59:10.423976       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:59:12.427209       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:59:12.431594       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:59:14.437348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:59:14.444882       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:59:16.448367       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:59:16.453775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:59:18.457302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:59:18.465754       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:59:20.472979       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:59:20.479597       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-936345 -n functional-936345
helpers_test.go:270: (dbg) Run:  kubectl --context functional-936345 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: busybox-mount hello-node-5758569b79-sxs2v hello-node-connect-9f67c86d4-wg5l4 kubernetes-dashboard-api-6f6bc9c789-nhrc9 kubernetes-dashboard-auth-7d44b44fcf-wk8mz kubernetes-dashboard-kong-78b7499b45-tkd7b kubernetes-dashboard-metrics-scraper-594bbfb84b-sb6xj kubernetes-dashboard-web-7f7574785f-gldgt
helpers_test.go:283: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context functional-936345 describe pod busybox-mount hello-node-5758569b79-sxs2v hello-node-connect-9f67c86d4-wg5l4 kubernetes-dashboard-api-6f6bc9c789-nhrc9 kubernetes-dashboard-auth-7d44b44fcf-wk8mz kubernetes-dashboard-kong-78b7499b45-tkd7b kubernetes-dashboard-metrics-scraper-594bbfb84b-sb6xj kubernetes-dashboard-web-7f7574785f-gldgt
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context functional-936345 describe pod busybox-mount hello-node-5758569b79-sxs2v hello-node-connect-9f67c86d4-wg5l4 kubernetes-dashboard-api-6f6bc9c789-nhrc9 kubernetes-dashboard-auth-7d44b44fcf-wk8mz kubernetes-dashboard-kong-78b7499b45-tkd7b kubernetes-dashboard-metrics-scraper-594bbfb84b-sb6xj kubernetes-dashboard-web-7f7574785f-gldgt: exit status 1 (86.237011ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-936345/192.168.39.80
	Start Time:       Fri, 19 Dec 2025 02:49:20 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://6e1a5f3eb5f526298bd503d941552876969e9e0556abcf372f10b81ea35a6035
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Fri, 19 Dec 2025 02:50:26 +0000
	      Finished:     Fri, 19 Dec 2025 02:50:26 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pq8s9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-pq8s9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  10m    default-scheduler  Successfully assigned default/busybox-mount to functional-936345
	  Normal  Pulling    9m59s  kubelet            spec.containers{mount-munger}: Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     8m54s  kubelet            spec.containers{mount-munger}: Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.991s (1m5.047s including waiting). Image size: 4631262 bytes.
	  Normal  Created    8m54s  kubelet            spec.containers{mount-munger}: Container created
	  Normal  Started    8m54s  kubelet            spec.containers{mount-munger}: Container started
	
	
	Name:             hello-node-5758569b79-sxs2v
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-936345/192.168.39.80
	Start Time:       Fri, 19 Dec 2025 02:49:18 +0000
	Labels:           app=hello-node
	                  pod-template-hash=5758569b79
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-5758569b79
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ddmsm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-ddmsm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/hello-node-5758569b79-sxs2v to functional-936345
	  Warning  Failed     8m57s                  kubelet            spec.containers{echo-server}: Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     4m20s (x2 over 8m57s)  kubelet            spec.containers{echo-server}: Error: ErrImagePull
	  Warning  Failed     4m20s                  kubelet            spec.containers{echo-server}: Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    4m5s (x2 over 8m57s)   kubelet            spec.containers{echo-server}: Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m5s (x2 over 8m57s)   kubelet            spec.containers{echo-server}: Error: ImagePullBackOff
	  Normal   Pulling    3m54s (x3 over 10m)    kubelet            spec.containers{echo-server}: Pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-9f67c86d4-wg5l4
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-936345/192.168.39.80
	Start Time:       Fri, 19 Dec 2025 02:49:18 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=9f67c86d4
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-46q8d (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-46q8d:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-wg5l4 to functional-936345
	  Warning  Failed     9m29s                kubelet            spec.containers{echo-server}: Failed to pull image "kicbase/echo-server": initializing source docker://kicbase/echo-server:latest: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     5m7s                 kubelet            spec.containers{echo-server}: Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    4m44s (x3 over 10m)  kubelet            spec.containers{echo-server}: Pulling image "kicbase/echo-server"
	  Warning  Failed     24s (x3 over 9m29s)  kubelet            spec.containers{echo-server}: Error: ErrImagePull
	  Warning  Failed     24s                  kubelet            spec.containers{echo-server}: Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    2s (x4 over 9m28s)   kubelet            spec.containers{echo-server}: Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     2s (x4 over 9m28s)   kubelet            spec.containers{echo-server}: Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "kubernetes-dashboard-api-6f6bc9c789-nhrc9" not found
	Error from server (NotFound): pods "kubernetes-dashboard-auth-7d44b44fcf-wk8mz" not found
	Error from server (NotFound): pods "kubernetes-dashboard-kong-78b7499b45-tkd7b" not found
	Error from server (NotFound): pods "kubernetes-dashboard-metrics-scraper-594bbfb84b-sb6xj" not found
	Error from server (NotFound): pods "kubernetes-dashboard-web-7f7574785f-gldgt" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context functional-936345 describe pod busybox-mount hello-node-5758569b79-sxs2v hello-node-connect-9f67c86d4-wg5l4 kubernetes-dashboard-api-6f6bc9c789-nhrc9 kubernetes-dashboard-auth-7d44b44fcf-wk8mz kubernetes-dashboard-kong-78b7499b45-tkd7b kubernetes-dashboard-metrics-scraper-594bbfb84b-sb6xj kubernetes-dashboard-web-7f7574785f-gldgt: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect (602.69s)
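Note: the pod events above all point at the same root cause: docker.io answers the unauthenticated pull of kicbase/echo-server with toomanyrequests. Outside the harness, one way to get such a pod running is to load the image into the node and stop the kubelet from re-pulling it. This is a sketch only, assuming the host's Docker daemon still has pull quota; the profile, context, and container names are taken from the logs above:

    # pull once on the host (still subject to the Hub limit, but a single pull)
    docker pull kicbase/echo-server:latest
    # copy the image into the node's CRI-O store so the kubelet can find it locally
    out/minikube-linux-amd64 -p functional-936345 image load kicbase/echo-server:latest
    # the deployment uses an untagged image (implicit :latest), which defaults to
    # imagePullPolicy: Always; switch to IfNotPresent so the loaded copy is used
    kubectl --context functional-936345 -n default patch deployment hello-node-connect \
      --patch '{"spec":{"template":{"spec":{"containers":[{"name":"echo-server","imagePullPolicy":"IfNotPresent"}]}}}}'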

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp (600.64s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-936345 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-936345 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-5758569b79-sxs2v" [d0ff39cd-f508-455e-9968-48efd92e3e42] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1460: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-936345 -n functional-936345
functional_test.go:1460: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-12-19 02:59:18.772226198 +0000 UTC m=+2061.810405511
functional_test.go:1460: (dbg) Run:  kubectl --context functional-936345 describe po hello-node-5758569b79-sxs2v -n default
functional_test.go:1460: (dbg) kubectl --context functional-936345 describe po hello-node-5758569b79-sxs2v -n default:
Name:             hello-node-5758569b79-sxs2v
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-936345/192.168.39.80
Start Time:       Fri, 19 Dec 2025 02:49:18 +0000
Labels:           app=hello-node
pod-template-hash=5758569b79
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-5758569b79
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ddmsm (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-ddmsm:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/hello-node-5758569b79-sxs2v to functional-936345
Warning  Failed     8m55s                  kubelet            spec.containers{echo-server}: Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     4m18s (x2 over 8m55s)  kubelet            spec.containers{echo-server}: Error: ErrImagePull
Warning  Failed     4m18s                  kubelet            spec.containers{echo-server}: Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    4m3s (x2 over 8m55s)   kubelet            spec.containers{echo-server}: Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m3s (x2 over 8m55s)   kubelet            spec.containers{echo-server}: Error: ImagePullBackOff
Normal   Pulling    3m52s (x3 over 9m59s)  kubelet            spec.containers{echo-server}: Pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-936345 logs hello-node-5758569b79-sxs2v -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-936345 logs hello-node-5758569b79-sxs2v -n default: exit status 1 (69.029714ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-5758569b79-sxs2v" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-936345 logs hello-node-5758569b79-sxs2v -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp (600.64s)
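For reference, the 10-minute readiness poll that times out here can be reproduced directly with kubectl; this is a sketch using the context, namespace, and label from the test output above, not part of the harness:

    # wait up to 10 minutes for a Ready pod carrying the deployment's label
    kubectl --context functional-936345 -n default wait --for=condition=Ready pod -l app=hello-node --timeout=10m
    # on timeout, the pod status and events show why it never became Ready
    kubectl --context functional-936345 -n default get pods -l app=hello-node -o wide
    kubectl --context functional-936345 -n default describe pods -l app=hello-node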

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS (0.23s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-936345 service --namespace=default --https --url hello-node: exit status 115 (228.149331ms)

                                                
                                                
-- stdout --
	https://192.168.39.80:31087
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-936345 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS (0.23s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format (0.23s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-936345 service hello-node --url --format={{.IP}}: exit status 115 (227.893713ms)

                                                
                                                
-- stdout --
	192.168.39.80
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-936345 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format (0.23s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL (0.23s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-936345 service hello-node --url: exit status 115 (227.012744ms)

                                                
                                                
-- stdout --
	http://192.168.39.80:31087
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-936345 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.80:31087
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL (0.23s)

                                                
                                    
x
+
TestPreload (146.77s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-751780 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio
E1219 03:37:21.561071    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-936345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:37:49.626090    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-751780 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio: (1m30.817114858s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-751780 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 -p test-preload-751780 image pull gcr.io/k8s-minikube/busybox: (3.534610524s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-751780
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-751780: (8.354144167s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-751780 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-751780 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (41.619006006s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-751780 image list
preload_test.go:73: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/kube-scheduler:v1.34.3
	registry.k8s.io/kube-proxy:v1.34.3
	registry.k8s.io/kube-controller-manager:v1.34.3
	registry.k8s.io/kube-apiserver:v1.34.3
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20250512-df8de77b

                                                
                                                
-- /stdout --
panic.go:615: *** TestPreload FAILED at 2025-12-19 03:39:10.510902964 +0000 UTC m=+4453.549082277
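TestPreload checks that an image pulled into the node (gcr.io/k8s-minikube/busybox) survives a stop/start when the preload tarball is applied on the second boot; the image list above contains only the preloaded images, so the pulled image was dropped. A rough manual reproduction of the same sequence, using only flags that appear in the test log above (the profile name preload-check is arbitrary):

    # start once without the preload tarball and pull an extra image into the node
    out/minikube-linux-amd64 start -p preload-check --memory=3072 --preload=false --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p preload-check image pull gcr.io/k8s-minikube/busybox
    # stop, then restart with the preload enabled; the pulled image should survive
    out/minikube-linux-amd64 stop -p preload-check
    out/minikube-linux-amd64 start -p preload-check --preload=true --alsologtostderr --wait=true --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p preload-check image list | grep busybox   # empty output reproduces this failure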
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-751780 -n test-preload-751780
helpers_test.go:253: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-751780 logs -n 25
helpers_test.go:261: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-154692 ssh -n multinode-154692-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-154692     │ jenkins │ v1.37.0 │ 19 Dec 25 03:26 UTC │ 19 Dec 25 03:26 UTC │
	│ ssh     │ multinode-154692 ssh -n multinode-154692 sudo cat /home/docker/cp-test_multinode-154692-m03_multinode-154692.txt                                          │ multinode-154692     │ jenkins │ v1.37.0 │ 19 Dec 25 03:26 UTC │ 19 Dec 25 03:26 UTC │
	│ cp      │ multinode-154692 cp multinode-154692-m03:/home/docker/cp-test.txt multinode-154692-m02:/home/docker/cp-test_multinode-154692-m03_multinode-154692-m02.txt │ multinode-154692     │ jenkins │ v1.37.0 │ 19 Dec 25 03:26 UTC │ 19 Dec 25 03:26 UTC │
	│ ssh     │ multinode-154692 ssh -n multinode-154692-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-154692     │ jenkins │ v1.37.0 │ 19 Dec 25 03:26 UTC │ 19 Dec 25 03:26 UTC │
	│ ssh     │ multinode-154692 ssh -n multinode-154692-m02 sudo cat /home/docker/cp-test_multinode-154692-m03_multinode-154692-m02.txt                                  │ multinode-154692     │ jenkins │ v1.37.0 │ 19 Dec 25 03:26 UTC │ 19 Dec 25 03:26 UTC │
	│ node    │ multinode-154692 node stop m03                                                                                                                            │ multinode-154692     │ jenkins │ v1.37.0 │ 19 Dec 25 03:26 UTC │ 19 Dec 25 03:26 UTC │
	│ node    │ multinode-154692 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-154692     │ jenkins │ v1.37.0 │ 19 Dec 25 03:26 UTC │ 19 Dec 25 03:27 UTC │
	│ node    │ list -p multinode-154692                                                                                                                                  │ multinode-154692     │ jenkins │ v1.37.0 │ 19 Dec 25 03:27 UTC │                     │
	│ stop    │ -p multinode-154692                                                                                                                                       │ multinode-154692     │ jenkins │ v1.37.0 │ 19 Dec 25 03:27 UTC │ 19 Dec 25 03:29 UTC │
	│ start   │ -p multinode-154692 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-154692     │ jenkins │ v1.37.0 │ 19 Dec 25 03:29 UTC │ 19 Dec 25 03:31 UTC │
	│ node    │ list -p multinode-154692                                                                                                                                  │ multinode-154692     │ jenkins │ v1.37.0 │ 19 Dec 25 03:31 UTC │                     │
	│ node    │ multinode-154692 node delete m03                                                                                                                          │ multinode-154692     │ jenkins │ v1.37.0 │ 19 Dec 25 03:31 UTC │ 19 Dec 25 03:31 UTC │
	│ stop    │ multinode-154692 stop                                                                                                                                     │ multinode-154692     │ jenkins │ v1.37.0 │ 19 Dec 25 03:31 UTC │ 19 Dec 25 03:34 UTC │
	│ start   │ -p multinode-154692 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio                                                            │ multinode-154692     │ jenkins │ v1.37.0 │ 19 Dec 25 03:34 UTC │ 19 Dec 25 03:36 UTC │
	│ node    │ list -p multinode-154692                                                                                                                                  │ multinode-154692     │ jenkins │ v1.37.0 │ 19 Dec 25 03:36 UTC │                     │
	│ start   │ -p multinode-154692-m02 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-154692-m02 │ jenkins │ v1.37.0 │ 19 Dec 25 03:36 UTC │                     │
	│ start   │ -p multinode-154692-m03 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-154692-m03 │ jenkins │ v1.37.0 │ 19 Dec 25 03:36 UTC │ 19 Dec 25 03:36 UTC │
	│ node    │ add -p multinode-154692                                                                                                                                   │ multinode-154692     │ jenkins │ v1.37.0 │ 19 Dec 25 03:36 UTC │                     │
	│ delete  │ -p multinode-154692-m03                                                                                                                                   │ multinode-154692-m03 │ jenkins │ v1.37.0 │ 19 Dec 25 03:36 UTC │ 19 Dec 25 03:36 UTC │
	│ delete  │ -p multinode-154692                                                                                                                                       │ multinode-154692     │ jenkins │ v1.37.0 │ 19 Dec 25 03:36 UTC │ 19 Dec 25 03:36 UTC │
	│ start   │ -p test-preload-751780 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio                                │ test-preload-751780  │ jenkins │ v1.37.0 │ 19 Dec 25 03:36 UTC │ 19 Dec 25 03:38 UTC │
	│ image   │ test-preload-751780 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-751780  │ jenkins │ v1.37.0 │ 19 Dec 25 03:38 UTC │ 19 Dec 25 03:38 UTC │
	│ stop    │ -p test-preload-751780                                                                                                                                    │ test-preload-751780  │ jenkins │ v1.37.0 │ 19 Dec 25 03:38 UTC │ 19 Dec 25 03:38 UTC │
	│ start   │ -p test-preload-751780 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                          │ test-preload-751780  │ jenkins │ v1.37.0 │ 19 Dec 25 03:38 UTC │ 19 Dec 25 03:39 UTC │
	│ image   │ test-preload-751780 image list                                                                                                                            │ test-preload-751780  │ jenkins │ v1.37.0 │ 19 Dec 25 03:39 UTC │ 19 Dec 25 03:39 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 03:38:28
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 03:38:28.764137   38916 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:38:28.764366   38916 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:38:28.764373   38916 out.go:374] Setting ErrFile to fd 2...
	I1219 03:38:28.764377   38916 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:38:28.764545   38916 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5010/.minikube/bin
	I1219 03:38:28.764982   38916 out.go:368] Setting JSON to false
	I1219 03:38:28.765803   38916 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4853,"bootTime":1766110656,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 03:38:28.765863   38916 start.go:143] virtualization: kvm guest
	I1219 03:38:28.767614   38916 out.go:179] * [test-preload-751780] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 03:38:28.768650   38916 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 03:38:28.768665   38916 notify.go:221] Checking for updates...
	I1219 03:38:28.770696   38916 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 03:38:28.771721   38916 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-5010/kubeconfig
	I1219 03:38:28.772709   38916 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5010/.minikube
	I1219 03:38:28.773755   38916 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 03:38:28.774796   38916 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 03:38:28.776162   38916 config.go:182] Loaded profile config "test-preload-751780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:38:28.776677   38916 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 03:38:28.808902   38916 out.go:179] * Using the kvm2 driver based on existing profile
	I1219 03:38:28.809773   38916 start.go:309] selected driver: kvm2
	I1219 03:38:28.809787   38916 start.go:928] validating driver "kvm2" against &{Name:test-preload-751780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:test-preload-751780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:38:28.809896   38916 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 03:38:28.810753   38916 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:38:28.810787   38916 cni.go:84] Creating CNI manager for ""
	I1219 03:38:28.810856   38916 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1219 03:38:28.810913   38916 start.go:353] cluster config:
	{Name:test-preload-751780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:test-preload-751780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:38:28.811013   38916 iso.go:125] acquiring lock: {Name:mk42290af04a74a4dddf27fa33aac85c9bdccfe0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:38:28.812746   38916 out.go:179] * Starting "test-preload-751780" primary control-plane node in "test-preload-751780" cluster
	I1219 03:38:28.813809   38916 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1219 03:38:28.813833   38916 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-5010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1219 03:38:28.813840   38916 cache.go:65] Caching tarball of preloaded images
	I1219 03:38:28.813904   38916 preload.go:238] Found /home/jenkins/minikube-integration/22230-5010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1219 03:38:28.813914   38916 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1219 03:38:28.814007   38916 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/test-preload-751780/config.json ...
	I1219 03:38:28.814186   38916 start.go:360] acquireMachinesLock for test-preload-751780: {Name:mk229398d29442d4d52885bbac963e5004bfbfba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1219 03:38:28.814222   38916 start.go:364] duration metric: took 20.536µs to acquireMachinesLock for "test-preload-751780"
	I1219 03:38:28.814234   38916 start.go:96] Skipping create...Using existing machine configuration
	I1219 03:38:28.814239   38916 fix.go:54] fixHost starting: 
	I1219 03:38:28.815665   38916 fix.go:112] recreateIfNeeded on test-preload-751780: state=Stopped err=<nil>
	W1219 03:38:28.815683   38916 fix.go:138] unexpected machine state, will restart: <nil>
	I1219 03:38:28.816982   38916 out.go:252] * Restarting existing kvm2 VM for "test-preload-751780" ...
	I1219 03:38:28.817017   38916 main.go:144] libmachine: starting domain...
	I1219 03:38:28.817029   38916 main.go:144] libmachine: ensuring networks are active...
	I1219 03:38:28.817720   38916 main.go:144] libmachine: Ensuring network default is active
	I1219 03:38:28.818105   38916 main.go:144] libmachine: Ensuring network mk-test-preload-751780 is active
	I1219 03:38:28.818552   38916 main.go:144] libmachine: getting domain XML...
	I1219 03:38:28.819520   38916 main.go:144] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>test-preload-751780</name>
	  <uuid>3785bfd2-5e6f-4118-9891-7ee91ed1ffe7</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22230-5010/.minikube/machines/test-preload-751780/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22230-5010/.minikube/machines/test-preload-751780/test-preload-751780.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:8d:3c:d7'/>
	      <source network='mk-test-preload-751780'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:f9:61:ed'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1219 03:38:30.040197   38916 main.go:144] libmachine: waiting for domain to start...
	I1219 03:38:30.041766   38916 main.go:144] libmachine: domain is now running
	I1219 03:38:30.041796   38916 main.go:144] libmachine: waiting for IP...
	I1219 03:38:30.042776   38916 main.go:144] libmachine: domain test-preload-751780 has defined MAC address 52:54:00:8d:3c:d7 in network mk-test-preload-751780
	I1219 03:38:30.043375   38916 main.go:144] libmachine: domain test-preload-751780 has current primary IP address 192.168.39.240 and MAC address 52:54:00:8d:3c:d7 in network mk-test-preload-751780
	I1219 03:38:30.043392   38916 main.go:144] libmachine: found domain IP: 192.168.39.240
	I1219 03:38:30.043401   38916 main.go:144] libmachine: reserving static IP address...
	I1219 03:38:30.043857   38916 main.go:144] libmachine: found host DHCP lease matching {name: "test-preload-751780", mac: "52:54:00:8d:3c:d7", ip: "192.168.39.240"} in network mk-test-preload-751780: {Iface:virbr1 ExpiryTime:2025-12-19 04:37:00 +0000 UTC Type:0 Mac:52:54:00:8d:3c:d7 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:test-preload-751780 Clientid:01:52:54:00:8d:3c:d7}
	I1219 03:38:30.043896   38916 main.go:144] libmachine: skip adding static IP to network mk-test-preload-751780 - found existing host DHCP lease matching {name: "test-preload-751780", mac: "52:54:00:8d:3c:d7", ip: "192.168.39.240"}
	I1219 03:38:30.043917   38916 main.go:144] libmachine: reserved static IP address 192.168.39.240 for domain test-preload-751780
	I1219 03:38:30.043931   38916 main.go:144] libmachine: waiting for SSH...
	I1219 03:38:30.043944   38916 main.go:144] libmachine: Getting to WaitForSSH function...
	I1219 03:38:30.046743   38916 main.go:144] libmachine: domain test-preload-751780 has defined MAC address 52:54:00:8d:3c:d7 in network mk-test-preload-751780
	I1219 03:38:30.047172   38916 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:3c:d7", ip: ""} in network mk-test-preload-751780: {Iface:virbr1 ExpiryTime:2025-12-19 04:37:00 +0000 UTC Type:0 Mac:52:54:00:8d:3c:d7 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:test-preload-751780 Clientid:01:52:54:00:8d:3c:d7}
	I1219 03:38:30.047204   38916 main.go:144] libmachine: domain test-preload-751780 has defined IP address 192.168.39.240 and MAC address 52:54:00:8d:3c:d7 in network mk-test-preload-751780
	I1219 03:38:30.047393   38916 main.go:144] libmachine: Using SSH client type: native
	I1219 03:38:30.047629   38916 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I1219 03:38:30.047641   38916 main.go:144] libmachine: About to run SSH command:
	exit 0
	I1219 03:38:33.133795   38916 main.go:144] libmachine: Error dialing TCP: dial tcp 192.168.39.240:22: connect: no route to host
	I1219 03:38:39.213825   38916 main.go:144] libmachine: Error dialing TCP: dial tcp 192.168.39.240:22: connect: no route to host
	I1219 03:38:42.316129   38916 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 03:38:42.319506   38916 main.go:144] libmachine: domain test-preload-751780 has defined MAC address 52:54:00:8d:3c:d7 in network mk-test-preload-751780
	I1219 03:38:42.319910   38916 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:3c:d7", ip: ""} in network mk-test-preload-751780: {Iface:virbr1 ExpiryTime:2025-12-19 04:38:39 +0000 UTC Type:0 Mac:52:54:00:8d:3c:d7 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:test-preload-751780 Clientid:01:52:54:00:8d:3c:d7}
	I1219 03:38:42.319949   38916 main.go:144] libmachine: domain test-preload-751780 has defined IP address 192.168.39.240 and MAC address 52:54:00:8d:3c:d7 in network mk-test-preload-751780
	I1219 03:38:42.320150   38916 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/test-preload-751780/config.json ...
	I1219 03:38:42.320346   38916 machine.go:94] provisionDockerMachine start ...
	I1219 03:38:42.322480   38916 main.go:144] libmachine: domain test-preload-751780 has defined MAC address 52:54:00:8d:3c:d7 in network mk-test-preload-751780
	I1219 03:38:42.322882   38916 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:3c:d7", ip: ""} in network mk-test-preload-751780: {Iface:virbr1 ExpiryTime:2025-12-19 04:38:39 +0000 UTC Type:0 Mac:52:54:00:8d:3c:d7 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:test-preload-751780 Clientid:01:52:54:00:8d:3c:d7}
	I1219 03:38:42.322922   38916 main.go:144] libmachine: domain test-preload-751780 has defined IP address 192.168.39.240 and MAC address 52:54:00:8d:3c:d7 in network mk-test-preload-751780
	I1219 03:38:42.323074   38916 main.go:144] libmachine: Using SSH client type: native
	I1219 03:38:42.323322   38916 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I1219 03:38:42.323334   38916 main.go:144] libmachine: About to run SSH command:
	hostname
	I1219 03:38:42.425946   38916 main.go:144] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1219 03:38:42.425976   38916 buildroot.go:166] provisioning hostname "test-preload-751780"
	I1219 03:38:42.428809   38916 main.go:144] libmachine: domain test-preload-751780 has defined MAC address 52:54:00:8d:3c:d7 in network mk-test-preload-751780
	I1219 03:38:42.429177   38916 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:3c:d7", ip: ""} in network mk-test-preload-751780: {Iface:virbr1 ExpiryTime:2025-12-19 04:38:39 +0000 UTC Type:0 Mac:52:54:00:8d:3c:d7 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:test-preload-751780 Clientid:01:52:54:00:8d:3c:d7}
	I1219 03:38:42.429199   38916 main.go:144] libmachine: domain test-preload-751780 has defined IP address 192.168.39.240 and MAC address 52:54:00:8d:3c:d7 in network mk-test-preload-751780
	I1219 03:38:42.429360   38916 main.go:144] libmachine: Using SSH client type: native
	I1219 03:38:42.429556   38916 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I1219 03:38:42.429585   38916 main.go:144] libmachine: About to run SSH command:
	sudo hostname test-preload-751780 && echo "test-preload-751780" | sudo tee /etc/hostname
	I1219 03:38:42.552877   38916 main.go:144] libmachine: SSH cmd err, output: <nil>: test-preload-751780
	
	I1219 03:38:42.555861   38916 main.go:144] libmachine: domain test-preload-751780 has defined MAC address 52:54:00:8d:3c:d7 in network mk-test-preload-751780
	I1219 03:38:42.556281   38916 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:3c:d7", ip: ""} in network mk-test-preload-751780: {Iface:virbr1 ExpiryTime:2025-12-19 04:38:39 +0000 UTC Type:0 Mac:52:54:00:8d:3c:d7 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:test-preload-751780 Clientid:01:52:54:00:8d:3c:d7}
	I1219 03:38:42.556319   38916 main.go:144] libmachine: domain test-preload-751780 has defined IP address 192.168.39.240 and MAC address 52:54:00:8d:3c:d7 in network mk-test-preload-751780
	I1219 03:38:42.556535   38916 main.go:144] libmachine: Using SSH client type: native
	I1219 03:38:42.556807   38916 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I1219 03:38:42.556827   38916 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-751780' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-751780/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-751780' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1219 03:38:42.670735   38916 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 03:38:42.670766   38916 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22230-5010/.minikube CaCertPath:/home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22230-5010/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22230-5010/.minikube}
	I1219 03:38:42.670832   38916 buildroot.go:174] setting up certificates
	I1219 03:38:42.670845   38916 provision.go:84] configureAuth start
	I1219 03:38:42.673794   38916 main.go:144] libmachine: domain test-preload-751780 has defined MAC address 52:54:00:8d:3c:d7 in network mk-test-preload-751780
	I1219 03:38:42.674199   38916 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:3c:d7", ip: ""} in network mk-test-preload-751780: {Iface:virbr1 ExpiryTime:2025-12-19 04:38:39 +0000 UTC Type:0 Mac:52:54:00:8d:3c:d7 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:test-preload-751780 Clientid:01:52:54:00:8d:3c:d7}
	I1219 03:38:42.674221   38916 main.go:144] libmachine: domain test-preload-751780 has defined IP address 192.168.39.240 and MAC address 52:54:00:8d:3c:d7 in network mk-test-preload-751780
	I1219 03:38:42.676512   38916 main.go:144] libmachine: domain test-preload-751780 has defined MAC address 52:54:00:8d:3c:d7 in network mk-test-preload-751780
	I1219 03:38:42.676855   38916 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:3c:d7", ip: ""} in network mk-test-preload-751780: {Iface:virbr1 ExpiryTime:2025-12-19 04:38:39 +0000 UTC Type:0 Mac:52:54:00:8d:3c:d7 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:test-preload-751780 Clientid:01:52:54:00:8d:3c:d7}
	I1219 03:38:42.676882   38916 main.go:144] libmachine: domain test-preload-751780 has defined IP address 192.168.39.240 and MAC address 52:54:00:8d:3c:d7 in network mk-test-preload-751780
	I1219 03:38:42.676988   38916 provision.go:143] copyHostCerts
	I1219 03:38:42.677056   38916 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-5010/.minikube/ca.pem, removing ...
	I1219 03:38:42.677075   38916 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-5010/.minikube/ca.pem
	I1219 03:38:42.677154   38916 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22230-5010/.minikube/ca.pem (1082 bytes)
	I1219 03:38:42.677268   38916 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-5010/.minikube/cert.pem, removing ...
	I1219 03:38:42.677280   38916 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-5010/.minikube/cert.pem
	I1219 03:38:42.677322   38916 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22230-5010/.minikube/cert.pem (1123 bytes)
	I1219 03:38:42.677414   38916 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-5010/.minikube/key.pem, removing ...
	I1219 03:38:42.677425   38916 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-5010/.minikube/key.pem
	I1219 03:38:42.677467   38916 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22230-5010/.minikube/key.pem (1679 bytes)
	I1219 03:38:42.677538   38916 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22230-5010/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca-key.pem org=jenkins.test-preload-751780 san=[127.0.0.1 192.168.39.240 localhost minikube test-preload-751780]
	I1219 03:38:42.736183   38916 provision.go:177] copyRemoteCerts
	I1219 03:38:42.736238   38916 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1219 03:38:42.738477   38916 main.go:144] libmachine: domain test-preload-751780 has defined MAC address 52:54:00:8d:3c:d7 in network mk-test-preload-751780
	I1219 03:38:42.738793   38916 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:3c:d7", ip: ""} in network mk-test-preload-751780: {Iface:virbr1 ExpiryTime:2025-12-19 04:38:39 +0000 UTC Type:0 Mac:52:54:00:8d:3c:d7 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:test-preload-751780 Clientid:01:52:54:00:8d:3c:d7}
	I1219 03:38:42.738815   38916 main.go:144] libmachine: domain test-preload-751780 has defined IP address 192.168.39.240 and MAC address 52:54:00:8d:3c:d7 in network mk-test-preload-751780
	I1219 03:38:42.738988   38916 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/test-preload-751780/id_rsa Username:docker}
	I1219 03:38:42.821782   38916 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1219 03:38:42.850978   38916 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1219 03:38:42.877041   38916 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1219 03:38:42.902893   38916 provision.go:87] duration metric: took 232.035725ms to configureAuth
	I1219 03:38:42.902913   38916 buildroot.go:189] setting minikube options for container-runtime
	I1219 03:38:42.903079   38916 config.go:182] Loaded profile config "test-preload-751780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:38:42.905524   38916 main.go:144] libmachine: domain test-preload-751780 has defined MAC address 52:54:00:8d:3c:d7 in network mk-test-preload-751780
	I1219 03:38:42.905842   38916 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:3c:d7", ip: ""} in network mk-test-preload-751780: {Iface:virbr1 ExpiryTime:2025-12-19 04:38:39 +0000 UTC Type:0 Mac:52:54:00:8d:3c:d7 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:test-preload-751780 Clientid:01:52:54:00:8d:3c:d7}
	I1219 03:38:42.905862   38916 main.go:144] libmachine: domain test-preload-751780 has defined IP address 192.168.39.240 and MAC address 52:54:00:8d:3c:d7 in network mk-test-preload-751780
	I1219 03:38:42.905996   38916 main.go:144] libmachine: Using SSH client type: native
	I1219 03:38:42.906180   38916 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I1219 03:38:42.906193   38916 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1219 03:38:43.137283   38916 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1219 03:38:43.137320   38916 machine.go:97] duration metric: took 816.959714ms to provisionDockerMachine
	I1219 03:38:43.137334   38916 start.go:293] postStartSetup for "test-preload-751780" (driver="kvm2")
	I1219 03:38:43.137345   38916 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1219 03:38:43.137396   38916 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1219 03:38:43.140142   38916 main.go:144] libmachine: domain test-preload-751780 has defined MAC address 52:54:00:8d:3c:d7 in network mk-test-preload-751780
	I1219 03:38:43.140625   38916 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:3c:d7", ip: ""} in network mk-test-preload-751780: {Iface:virbr1 ExpiryTime:2025-12-19 04:38:39 +0000 UTC Type:0 Mac:52:54:00:8d:3c:d7 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:test-preload-751780 Clientid:01:52:54:00:8d:3c:d7}
	I1219 03:38:43.140656   38916 main.go:144] libmachine: domain test-preload-751780 has defined IP address 192.168.39.240 and MAC address 52:54:00:8d:3c:d7 in network mk-test-preload-751780
	I1219 03:38:43.140814   38916 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/test-preload-751780/id_rsa Username:docker}
	I1219 03:38:43.219939   38916 ssh_runner.go:195] Run: cat /etc/os-release
	I1219 03:38:43.224375   38916 info.go:137] Remote host: Buildroot 2025.02
	I1219 03:38:43.224400   38916 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-5010/.minikube/addons for local assets ...
	I1219 03:38:43.224473   38916 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-5010/.minikube/files for local assets ...
	I1219 03:38:43.224651   38916 filesync.go:149] local asset: /home/jenkins/minikube-integration/22230-5010/.minikube/files/etc/ssl/certs/89372.pem -> 89372.pem in /etc/ssl/certs
	I1219 03:38:43.224738   38916 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1219 03:38:43.235062   38916 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/files/etc/ssl/certs/89372.pem --> /etc/ssl/certs/89372.pem (1708 bytes)
	I1219 03:38:43.267026   38916 start.go:296] duration metric: took 129.679229ms for postStartSetup
	I1219 03:38:43.267067   38916 fix.go:56] duration metric: took 14.452825953s for fixHost
	I1219 03:38:43.269599   38916 main.go:144] libmachine: domain test-preload-751780 has defined MAC address 52:54:00:8d:3c:d7 in network mk-test-preload-751780
	I1219 03:38:43.270036   38916 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:3c:d7", ip: ""} in network mk-test-preload-751780: {Iface:virbr1 ExpiryTime:2025-12-19 04:38:39 +0000 UTC Type:0 Mac:52:54:00:8d:3c:d7 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:test-preload-751780 Clientid:01:52:54:00:8d:3c:d7}
	I1219 03:38:43.270070   38916 main.go:144] libmachine: domain test-preload-751780 has defined IP address 192.168.39.240 and MAC address 52:54:00:8d:3c:d7 in network mk-test-preload-751780
	I1219 03:38:43.270272   38916 main.go:144] libmachine: Using SSH client type: native
	I1219 03:38:43.270545   38916 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I1219 03:38:43.270576   38916 main.go:144] libmachine: About to run SSH command:
	date +%s.%N
	I1219 03:38:43.369274   38916 main.go:144] libmachine: SSH cmd err, output: <nil>: 1766115523.327433911
	
	I1219 03:38:43.369312   38916 fix.go:216] guest clock: 1766115523.327433911
	I1219 03:38:43.369321   38916 fix.go:229] Guest: 2025-12-19 03:38:43.327433911 +0000 UTC Remote: 2025-12-19 03:38:43.267073975 +0000 UTC m=+14.547691632 (delta=60.359936ms)
	I1219 03:38:43.369336   38916 fix.go:200] guest clock delta is within tolerance: 60.359936ms
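	The fix.go lines above read the guest clock over SSH with `date +%s.%N`, compare it against the host clock, and accept the drift when it is within tolerance (here 60.359936ms). The following Go snippet is a minimal illustrative sketch of that comparison only, not minikube's implementation; the parsing helper and the 2-second tolerance are assumptions for illustration.

	// Illustrative sketch (not minikube's fix.go): compute a guest/host clock delta
	// from the output of `date +%s.%N` and check it against an assumed tolerance.
	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock turns output like "1766115523.327433911" into a time.Time.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			// Right-pad the fractional part to 9 digits so ".3274" means 327400000 ns.
			frac := (parts[1] + "000000000")[:9]
			nsec, err = strconv.ParseInt(frac, 10, 64)
			if err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1766115523.327433911")
		if err != nil {
			panic(err)
		}
		host := time.Now()
		delta := host.Sub(guest)
		if delta < 0 {
			delta = -delta
		}
		// Hypothetical tolerance; the real threshold is not shown in this log.
		const tolerance = 2 * time.Second
		fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta <= tolerance)
	}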
	I1219 03:38:43.369341   38916 start.go:83] releasing machines lock for "test-preload-751780", held for 14.555111037s
	I1219 03:38:43.372020   38916 main.go:144] libmachine: domain test-preload-751780 has defined MAC address 52:54:00:8d:3c:d7 in network mk-test-preload-751780
	I1219 03:38:43.372346   38916 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:3c:d7", ip: ""} in network mk-test-preload-751780: {Iface:virbr1 ExpiryTime:2025-12-19 04:38:39 +0000 UTC Type:0 Mac:52:54:00:8d:3c:d7 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:test-preload-751780 Clientid:01:52:54:00:8d:3c:d7}
	I1219 03:38:43.372369   38916 main.go:144] libmachine: domain test-preload-751780 has defined IP address 192.168.39.240 and MAC address 52:54:00:8d:3c:d7 in network mk-test-preload-751780
	I1219 03:38:43.372866   38916 ssh_runner.go:195] Run: cat /version.json
	I1219 03:38:43.372933   38916 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1219 03:38:43.375811   38916 main.go:144] libmachine: domain test-preload-751780 has defined MAC address 52:54:00:8d:3c:d7 in network mk-test-preload-751780
	I1219 03:38:43.375941   38916 main.go:144] libmachine: domain test-preload-751780 has defined MAC address 52:54:00:8d:3c:d7 in network mk-test-preload-751780
	I1219 03:38:43.376206   38916 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:3c:d7", ip: ""} in network mk-test-preload-751780: {Iface:virbr1 ExpiryTime:2025-12-19 04:38:39 +0000 UTC Type:0 Mac:52:54:00:8d:3c:d7 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:test-preload-751780 Clientid:01:52:54:00:8d:3c:d7}
	I1219 03:38:43.376236   38916 main.go:144] libmachine: domain test-preload-751780 has defined IP address 192.168.39.240 and MAC address 52:54:00:8d:3c:d7 in network mk-test-preload-751780
	I1219 03:38:43.376296   38916 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:3c:d7", ip: ""} in network mk-test-preload-751780: {Iface:virbr1 ExpiryTime:2025-12-19 04:38:39 +0000 UTC Type:0 Mac:52:54:00:8d:3c:d7 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:test-preload-751780 Clientid:01:52:54:00:8d:3c:d7}
	I1219 03:38:43.376321   38916 main.go:144] libmachine: domain test-preload-751780 has defined IP address 192.168.39.240 and MAC address 52:54:00:8d:3c:d7 in network mk-test-preload-751780
	I1219 03:38:43.376366   38916 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/test-preload-751780/id_rsa Username:docker}
	I1219 03:38:43.376593   38916 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/test-preload-751780/id_rsa Username:docker}
	I1219 03:38:43.450689   38916 ssh_runner.go:195] Run: systemctl --version
	I1219 03:38:43.490443   38916 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1219 03:38:43.636367   38916 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1219 03:38:43.642798   38916 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1219 03:38:43.642866   38916 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1219 03:38:43.660704   38916 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1219 03:38:43.660722   38916 start.go:496] detecting cgroup driver to use...
	I1219 03:38:43.660778   38916 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1219 03:38:43.679624   38916 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1219 03:38:43.694787   38916 docker.go:218] disabling cri-docker service (if available) ...
	I1219 03:38:43.694836   38916 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1219 03:38:43.710475   38916 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1219 03:38:43.725032   38916 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1219 03:38:43.860470   38916 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1219 03:38:44.071074   38916 docker.go:234] disabling docker service ...
	I1219 03:38:44.071147   38916 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1219 03:38:44.087280   38916 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1219 03:38:44.101020   38916 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1219 03:38:44.255142   38916 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1219 03:38:44.390467   38916 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1219 03:38:44.406058   38916 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 03:38:44.427779   38916 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1219 03:38:44.427832   38916 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:38:44.439022   38916 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1219 03:38:44.439083   38916 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:38:44.450604   38916 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:38:44.462084   38916 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:38:44.473128   38916 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1219 03:38:44.484431   38916 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:38:44.495454   38916 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:38:44.514036   38916 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:38:44.525175   38916 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1219 03:38:44.534762   38916 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1219 03:38:44.534809   38916 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1219 03:38:44.553715   38916 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1219 03:38:44.564115   38916 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:38:44.700914   38916 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1219 03:38:44.801807   38916 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1219 03:38:44.801871   38916 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1219 03:38:44.807159   38916 start.go:564] Will wait 60s for crictl version
	I1219 03:38:44.807207   38916 ssh_runner.go:195] Run: which crictl
	I1219 03:38:44.810978   38916 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1219 03:38:44.844677   38916 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1219 03:38:44.844760   38916 ssh_runner.go:195] Run: crio --version
	I1219 03:38:44.871601   38916 ssh_runner.go:195] Run: crio --version
	I1219 03:38:44.898810   38916 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.29.1 ...
	I1219 03:38:44.902553   38916 main.go:144] libmachine: domain test-preload-751780 has defined MAC address 52:54:00:8d:3c:d7 in network mk-test-preload-751780
	I1219 03:38:44.902991   38916 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:3c:d7", ip: ""} in network mk-test-preload-751780: {Iface:virbr1 ExpiryTime:2025-12-19 04:38:39 +0000 UTC Type:0 Mac:52:54:00:8d:3c:d7 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:test-preload-751780 Clientid:01:52:54:00:8d:3c:d7}
	I1219 03:38:44.903015   38916 main.go:144] libmachine: domain test-preload-751780 has defined IP address 192.168.39.240 and MAC address 52:54:00:8d:3c:d7 in network mk-test-preload-751780
	I1219 03:38:44.903186   38916 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1219 03:38:44.907474   38916 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:38:44.926012   38916 kubeadm.go:884] updating cluster {Name:test-preload-751780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:test-preload-751780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1219 03:38:44.926131   38916 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1219 03:38:44.926192   38916 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:38:44.957805   38916 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.3". assuming images are not preloaded.
	I1219 03:38:44.957871   38916 ssh_runner.go:195] Run: which lz4
	I1219 03:38:44.961815   38916 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1219 03:38:44.966290   38916 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1219 03:38:44.966320   38916 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340314847 bytes)
	I1219 03:38:46.149318   38916 crio.go:462] duration metric: took 1.187525575s to copy over tarball
	I1219 03:38:46.149381   38916 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1219 03:38:47.573038   38916 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.423625079s)
	I1219 03:38:47.573073   38916 crio.go:469] duration metric: took 1.423731012s to extract the tarball
	I1219 03:38:47.573081   38916 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1219 03:38:47.609696   38916 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:38:47.654587   38916 crio.go:514] all images are preloaded for cri-o runtime.
	I1219 03:38:47.654618   38916 cache_images.go:86] Images are preloaded, skipping loading
	I1219 03:38:47.654627   38916 kubeadm.go:935] updating node { 192.168.39.240 8443 v1.34.3 crio true true} ...
	I1219 03:38:47.654775   38916 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-751780 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.240
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:test-preload-751780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1219 03:38:47.654871   38916 ssh_runner.go:195] Run: crio config
	I1219 03:38:47.699048   38916 cni.go:84] Creating CNI manager for ""
	I1219 03:38:47.699074   38916 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1219 03:38:47.699096   38916 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1219 03:38:47.699122   38916 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.240 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-751780 NodeName:test-preload-751780 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.240"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.240 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1219 03:38:47.699258   38916 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.240
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-751780"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.240"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.240"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1219 03:38:47.699328   38916 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1219 03:38:47.711468   38916 binaries.go:51] Found k8s binaries, skipping transfer
	I1219 03:38:47.711547   38916 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1219 03:38:47.722855   38916 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1219 03:38:47.744073   38916 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1219 03:38:47.762165   38916 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I1219 03:38:47.781188   38916 ssh_runner.go:195] Run: grep 192.168.39.240	control-plane.minikube.internal$ /etc/hosts
	I1219 03:38:47.784948   38916 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.240	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:38:47.797902   38916 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:38:47.927949   38916 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:38:47.957697   38916 certs.go:69] Setting up /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/test-preload-751780 for IP: 192.168.39.240
	I1219 03:38:47.957719   38916 certs.go:195] generating shared ca certs ...
	I1219 03:38:47.957735   38916 certs.go:227] acquiring lock for ca certs: {Name:mk433b742ffd55233764aebe3c6f298e0d1f02ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:38:47.957897   38916 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22230-5010/.minikube/ca.key
	I1219 03:38:47.957965   38916 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22230-5010/.minikube/proxy-client-ca.key
	I1219 03:38:47.957981   38916 certs.go:257] generating profile certs ...
	I1219 03:38:47.958081   38916 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/test-preload-751780/client.key
	I1219 03:38:47.958153   38916 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/test-preload-751780/apiserver.key.7240ff31
	I1219 03:38:47.958216   38916 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/test-preload-751780/proxy-client.key
	I1219 03:38:47.958373   38916 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/8937.pem (1338 bytes)
	W1219 03:38:47.958416   38916 certs.go:480] ignoring /home/jenkins/minikube-integration/22230-5010/.minikube/certs/8937_empty.pem, impossibly tiny 0 bytes
	I1219 03:38:47.958430   38916 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca-key.pem (1675 bytes)
	I1219 03:38:47.958466   38916 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca.pem (1082 bytes)
	I1219 03:38:47.958500   38916 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/cert.pem (1123 bytes)
	I1219 03:38:47.958534   38916 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/key.pem (1679 bytes)
	I1219 03:38:47.958613   38916 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/files/etc/ssl/certs/89372.pem (1708 bytes)
	I1219 03:38:47.959355   38916 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1219 03:38:47.994976   38916 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1219 03:38:48.028505   38916 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1219 03:38:48.058495   38916 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1219 03:38:48.086618   38916 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/test-preload-751780/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1219 03:38:48.112383   38916 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/test-preload-751780/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1219 03:38:48.139254   38916 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/test-preload-751780/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1219 03:38:48.166441   38916 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/test-preload-751780/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1219 03:38:48.193095   38916 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/certs/8937.pem --> /usr/share/ca-certificates/8937.pem (1338 bytes)
	I1219 03:38:48.218678   38916 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/files/etc/ssl/certs/89372.pem --> /usr/share/ca-certificates/89372.pem (1708 bytes)
	I1219 03:38:48.244462   38916 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1219 03:38:48.270246   38916 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1219 03:38:48.289393   38916 ssh_runner.go:195] Run: openssl version
	I1219 03:38:48.295208   38916 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/89372.pem
	I1219 03:38:48.305565   38916 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/89372.pem /etc/ssl/certs/89372.pem
	I1219 03:38:48.315705   38916 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/89372.pem
	I1219 03:38:48.320271   38916 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 19 02:46 /usr/share/ca-certificates/89372.pem
	I1219 03:38:48.320314   38916 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/89372.pem
	I1219 03:38:48.327046   38916 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1219 03:38:48.337082   38916 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/89372.pem /etc/ssl/certs/3ec20f2e.0
	I1219 03:38:48.347274   38916 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:38:48.357317   38916 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1219 03:38:48.367342   38916 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:38:48.371897   38916 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 19 02:26 /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:38:48.371947   38916 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:38:48.378300   38916 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1219 03:38:48.388378   38916 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1219 03:38:48.398997   38916 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8937.pem
	I1219 03:38:48.409661   38916 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8937.pem /etc/ssl/certs/8937.pem
	I1219 03:38:48.420001   38916 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8937.pem
	I1219 03:38:48.424544   38916 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 19 02:46 /usr/share/ca-certificates/8937.pem
	I1219 03:38:48.424655   38916 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8937.pem
	I1219 03:38:48.431479   38916 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1219 03:38:48.441725   38916 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8937.pem /etc/ssl/certs/51391683.0
	I1219 03:38:48.452248   38916 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1219 03:38:48.456942   38916 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1219 03:38:48.463679   38916 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1219 03:38:48.470283   38916 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1219 03:38:48.476947   38916 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1219 03:38:48.483343   38916 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1219 03:38:48.489823   38916 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1219 03:38:48.496137   38916 kubeadm.go:401] StartCluster: {Name:test-preload-751780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:test-preload-751780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:38:48.496198   38916 cri.go:57] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1219 03:38:48.496236   38916 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:38:48.527951   38916 cri.go:92] found id: ""
	I1219 03:38:48.528016   38916 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1219 03:38:48.539218   38916 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1219 03:38:48.539234   38916 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1219 03:38:48.539271   38916 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1219 03:38:48.549986   38916 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1219 03:38:48.550379   38916 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-751780" does not appear in /home/jenkins/minikube-integration/22230-5010/kubeconfig
	I1219 03:38:48.550471   38916 kubeconfig.go:62] /home/jenkins/minikube-integration/22230-5010/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-751780" cluster setting kubeconfig missing "test-preload-751780" context setting]
	I1219 03:38:48.550780   38916 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5010/kubeconfig: {Name:mk464ac013e036429e360ed78fc04687ed75f83a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:38:48.551244   38916 kapi.go:59] client config for test-preload-751780: &rest.Config{Host:"https://192.168.39.240:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22230-5010/.minikube/profiles/test-preload-751780/client.crt", KeyFile:"/home/jenkins/minikube-integration/22230-5010/.minikube/profiles/test-preload-751780/client.key", CAFile:"/home/jenkins/minikube-integration/22230-5010/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2863880), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1219 03:38:48.551685   38916 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1219 03:38:48.551700   38916 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1219 03:38:48.551705   38916 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1219 03:38:48.551709   38916 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1219 03:38:48.551712   38916 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1219 03:38:48.552090   38916 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1219 03:38:48.562275   38916 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.240
	I1219 03:38:48.562298   38916 kubeadm.go:1161] stopping kube-system containers ...
	I1219 03:38:48.562307   38916 cri.go:57] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1219 03:38:48.562341   38916 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:38:48.595639   38916 cri.go:92] found id: ""
	I1219 03:38:48.595690   38916 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1219 03:38:48.611960   38916 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1219 03:38:48.622609   38916 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1219 03:38:48.622622   38916 kubeadm.go:158] found existing configuration files:
	
	I1219 03:38:48.622653   38916 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1219 03:38:48.632086   38916 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1219 03:38:48.632134   38916 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1219 03:38:48.642070   38916 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1219 03:38:48.651708   38916 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1219 03:38:48.651749   38916 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1219 03:38:48.661866   38916 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1219 03:38:48.671447   38916 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1219 03:38:48.671499   38916 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1219 03:38:48.681841   38916 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1219 03:38:48.691502   38916 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1219 03:38:48.691549   38916 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1219 03:38:48.701590   38916 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1219 03:38:48.711913   38916 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:38:48.761883   38916 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:38:50.286952   38916 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.525039048s)
	I1219 03:38:50.287018   38916 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:38:50.534583   38916 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:38:50.593967   38916 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:38:50.676730   38916 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:38:50.676835   38916 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:38:51.177614   38916 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:38:51.677287   38916 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:38:52.177964   38916 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:38:52.677153   38916 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:38:52.729702   38916 api_server.go:72] duration metric: took 2.052986937s to wait for apiserver process to appear ...
	I1219 03:38:52.729737   38916 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:38:52.729763   38916 api_server.go:253] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
	I1219 03:38:52.730298   38916 api_server.go:269] stopped: https://192.168.39.240:8443/healthz: Get "https://192.168.39.240:8443/healthz": dial tcp 192.168.39.240:8443: connect: connection refused
	I1219 03:38:53.229888   38916 api_server.go:253] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
	I1219 03:38:55.362818   38916 api_server.go:279] https://192.168.39.240:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1219 03:38:55.362843   38916 api_server.go:103] status: https://192.168.39.240:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1219 03:38:55.362855   38916 api_server.go:253] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
	I1219 03:38:55.451557   38916 api_server.go:279] https://192.168.39.240:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1219 03:38:55.451602   38916 api_server.go:103] status: https://192.168.39.240:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1219 03:38:55.730071   38916 api_server.go:253] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
	I1219 03:38:55.762585   38916 api_server.go:279] https://192.168.39.240:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:38:55.762629   38916 api_server.go:103] status: https://192.168.39.240:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:38:56.230288   38916 api_server.go:253] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
	I1219 03:38:56.234732   38916 api_server.go:279] https://192.168.39.240:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:38:56.234759   38916 api_server.go:103] status: https://192.168.39.240:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:38:56.730516   38916 api_server.go:253] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
	I1219 03:38:56.735354   38916 api_server.go:279] https://192.168.39.240:8443/healthz returned 200:
	ok
	I1219 03:38:56.741856   38916 api_server.go:141] control plane version: v1.34.3
	I1219 03:38:56.741883   38916 api_server.go:131] duration metric: took 4.01213727s to wait for apiserver health ...
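The healthz polling above (anonymous 403s, then 500s while the rbac/bootstrap-roles and scheduling post-start hooks finish, then 200) is driven by minikube's api_server.go. A minimal standalone sketch of that kind of poll, assuming an anonymous HTTPS client that skips certificate verification; the real code paths and retry intervals differ:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 or the deadline passes.
// During a control-plane restart the endpoint typically returns 403 for
// anonymous users, then 500 while post-start hooks run, then 200.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Assumption: skip TLS verification, as an anonymous probe would.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.240:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}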
	I1219 03:38:56.741894   38916 cni.go:84] Creating CNI manager for ""
	I1219 03:38:56.741902   38916 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1219 03:38:56.743410   38916 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1219 03:38:56.744485   38916 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1219 03:38:56.756343   38916 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
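The bridge CNI step above copies a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist, but the payload itself is not shown in the log. A sketch of writing a typical bridge-plus-portmap conflist of that shape; the JSON below is illustrative and may differ from the file minikube actually generates:

package main

import (
	"log"
	"os"
)

// Illustrative bridge CNI configuration; the real /etc/cni/net.d/1-k8s.conflist
// written by minikube may differ in names, subnet, and options.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		log.Fatal(err)
	}
}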
	I1219 03:38:56.778875   38916 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:38:56.785827   38916 system_pods.go:59] 7 kube-system pods found
	I1219 03:38:56.785870   38916 system_pods.go:61] "coredns-66bc5c9577-wsc95" [7dbb33d2-d6cd-4e64-91fc-e8fa27ed17fe] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:38:56.785884   38916 system_pods.go:61] "etcd-test-preload-751780" [75b1fb2e-698c-475e-9f4c-30015425a522] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:38:56.785911   38916 system_pods.go:61] "kube-apiserver-test-preload-751780" [c6b98401-2b0a-475b-9e15-b94bab489e2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:38:56.785926   38916 system_pods.go:61] "kube-controller-manager-test-preload-751780" [d7e250d7-7e96-48d7-9ec4-6bc144d4a7ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:38:56.785970   38916 system_pods.go:61] "kube-proxy-8hbbs" [8e877548-ddd5-4744-9032-fb1d2c274e6b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:38:56.785985   38916 system_pods.go:61] "kube-scheduler-test-preload-751780" [5e049f70-50bf-4a72-97b1-ade7d568b673] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:38:56.785995   38916 system_pods.go:61] "storage-provisioner" [5a233c3f-32dd-4d2d-bc52-605c0969517c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:38:56.786003   38916 system_pods.go:74] duration metric: took 7.106586ms to wait for pod list to return data ...
	I1219 03:38:56.786017   38916 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:38:56.790409   38916 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1219 03:38:56.790434   38916 node_conditions.go:123] node cpu capacity is 2
	I1219 03:38:56.790450   38916 node_conditions.go:105] duration metric: took 4.427112ms to run NodePressure ...
	I1219 03:38:56.790506   38916 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:38:57.069372   38916 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1219 03:38:57.072479   38916 kubeadm.go:744] kubelet initialised
	I1219 03:38:57.072497   38916 kubeadm.go:745] duration metric: took 3.09897ms waiting for restarted kubelet to initialise ...
	I1219 03:38:57.072510   38916 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1219 03:38:57.087266   38916 ops.go:34] apiserver oom_adj: -16
	I1219 03:38:57.087283   38916 kubeadm.go:602] duration metric: took 8.54804329s to restartPrimaryControlPlane
	I1219 03:38:57.087291   38916 kubeadm.go:403] duration metric: took 8.591159475s to StartCluster
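The oom_adj check a few lines above shells out to pgrep and cat; a minimal sketch of the same check done directly in Go, assuming a host with procfs (this is not the ops.go implementation):

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Find the newest process whose name is exactly kube-apiserver.
	out, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
	if err != nil {
		log.Fatalf("pgrep failed: %v", err)
	}
	pid := strings.TrimSpace(string(out))

	// oom_adj is the legacy kernel knob; the run above reported -16.
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		log.Fatalf("reading oom_adj: %v", err)
	}
	fmt.Printf("apiserver oom_adj: %s", adj)
}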
	I1219 03:38:57.087304   38916 settings.go:142] acquiring lock: {Name:mkb131693a40b9aa50c302192272518c3561c861 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:38:57.087364   38916 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-5010/kubeconfig
	I1219 03:38:57.087993   38916 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5010/kubeconfig: {Name:mk464ac013e036429e360ed78fc04687ed75f83a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:38:57.088201   38916 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1219 03:38:57.088321   38916 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1219 03:38:57.088388   38916 addons.go:70] Setting storage-provisioner=true in profile "test-preload-751780"
	I1219 03:38:57.088392   38916 config.go:182] Loaded profile config "test-preload-751780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:38:57.088406   38916 addons.go:239] Setting addon storage-provisioner=true in "test-preload-751780"
	W1219 03:38:57.088414   38916 addons.go:248] addon storage-provisioner should already be in state true
	I1219 03:38:57.088415   38916 addons.go:70] Setting default-storageclass=true in profile "test-preload-751780"
	I1219 03:38:57.088439   38916 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "test-preload-751780"
	I1219 03:38:57.088443   38916 host.go:66] Checking if "test-preload-751780" exists ...
	I1219 03:38:57.090351   38916 out.go:179] * Verifying Kubernetes components...
	I1219 03:38:57.090551   38916 kapi.go:59] client config for test-preload-751780: &rest.Config{Host:"https://192.168.39.240:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22230-5010/.minikube/profiles/test-preload-751780/client.crt", KeyFile:"/home/jenkins/minikube-integration/22230-5010/.minikube/profiles/test-preload-751780/client.key", CAFile:"/home/jenkins/minikube-integration/22230-5010/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2863880), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1219 03:38:57.090867   38916 addons.go:239] Setting addon default-storageclass=true in "test-preload-751780"
	W1219 03:38:57.090882   38916 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:38:57.090899   38916 host.go:66] Checking if "test-preload-751780" exists ...
	I1219 03:38:57.091712   38916 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:38:57.091714   38916 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 03:38:57.092310   38916 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:38:57.092326   38916 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:38:57.093041   38916 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:38:57.093057   38916 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:38:57.094999   38916 main.go:144] libmachine: domain test-preload-751780 has defined MAC address 52:54:00:8d:3c:d7 in network mk-test-preload-751780
	I1219 03:38:57.095456   38916 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:3c:d7", ip: ""} in network mk-test-preload-751780: {Iface:virbr1 ExpiryTime:2025-12-19 04:38:39 +0000 UTC Type:0 Mac:52:54:00:8d:3c:d7 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:test-preload-751780 Clientid:01:52:54:00:8d:3c:d7}
	I1219 03:38:57.095485   38916 main.go:144] libmachine: domain test-preload-751780 has defined IP address 192.168.39.240 and MAC address 52:54:00:8d:3c:d7 in network mk-test-preload-751780
	I1219 03:38:57.095626   38916 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/test-preload-751780/id_rsa Username:docker}
	I1219 03:38:57.095700   38916 main.go:144] libmachine: domain test-preload-751780 has defined MAC address 52:54:00:8d:3c:d7 in network mk-test-preload-751780
	I1219 03:38:57.096044   38916 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:3c:d7", ip: ""} in network mk-test-preload-751780: {Iface:virbr1 ExpiryTime:2025-12-19 04:38:39 +0000 UTC Type:0 Mac:52:54:00:8d:3c:d7 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:test-preload-751780 Clientid:01:52:54:00:8d:3c:d7}
	I1219 03:38:57.096066   38916 main.go:144] libmachine: domain test-preload-751780 has defined IP address 192.168.39.240 and MAC address 52:54:00:8d:3c:d7 in network mk-test-preload-751780
	I1219 03:38:57.096196   38916 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/test-preload-751780/id_rsa Username:docker}
	I1219 03:38:57.283860   38916 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:38:57.306304   38916 node_ready.go:35] waiting up to 6m0s for node "test-preload-751780" to be "Ready" ...
	I1219 03:38:57.309408   38916 node_ready.go:49] node "test-preload-751780" is "Ready"
	I1219 03:38:57.309430   38916 node_ready.go:38] duration metric: took 3.088541ms for node "test-preload-751780" to be "Ready" ...
	I1219 03:38:57.309441   38916 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:38:57.309501   38916 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:38:57.329476   38916 api_server.go:72] duration metric: took 241.248084ms to wait for apiserver process to appear ...
	I1219 03:38:57.329496   38916 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:38:57.329513   38916 api_server.go:253] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
	I1219 03:38:57.334043   38916 api_server.go:279] https://192.168.39.240:8443/healthz returned 200:
	ok
	I1219 03:38:57.335032   38916 api_server.go:141] control plane version: v1.34.3
	I1219 03:38:57.335053   38916 api_server.go:131] duration metric: took 5.549294ms to wait for apiserver health ...
	I1219 03:38:57.335062   38916 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:38:57.338014   38916 system_pods.go:59] 7 kube-system pods found
	I1219 03:38:57.338043   38916 system_pods.go:61] "coredns-66bc5c9577-wsc95" [7dbb33d2-d6cd-4e64-91fc-e8fa27ed17fe] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:38:57.338052   38916 system_pods.go:61] "etcd-test-preload-751780" [75b1fb2e-698c-475e-9f4c-30015425a522] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:38:57.338062   38916 system_pods.go:61] "kube-apiserver-test-preload-751780" [c6b98401-2b0a-475b-9e15-b94bab489e2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:38:57.338071   38916 system_pods.go:61] "kube-controller-manager-test-preload-751780" [d7e250d7-7e96-48d7-9ec4-6bc144d4a7ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:38:57.338077   38916 system_pods.go:61] "kube-proxy-8hbbs" [8e877548-ddd5-4744-9032-fb1d2c274e6b] Running
	I1219 03:38:57.338086   38916 system_pods.go:61] "kube-scheduler-test-preload-751780" [5e049f70-50bf-4a72-97b1-ade7d568b673] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:38:57.338095   38916 system_pods.go:61] "storage-provisioner" [5a233c3f-32dd-4d2d-bc52-605c0969517c] Running
	I1219 03:38:57.338104   38916 system_pods.go:74] duration metric: took 3.034015ms to wait for pod list to return data ...
	I1219 03:38:57.338114   38916 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:38:57.340883   38916 default_sa.go:45] found service account: "default"
	I1219 03:38:57.340901   38916 default_sa.go:55] duration metric: took 2.77989ms for default service account to be created ...
	I1219 03:38:57.340909   38916 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:38:57.343520   38916 system_pods.go:86] 7 kube-system pods found
	I1219 03:38:57.343547   38916 system_pods.go:89] "coredns-66bc5c9577-wsc95" [7dbb33d2-d6cd-4e64-91fc-e8fa27ed17fe] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:38:57.343556   38916 system_pods.go:89] "etcd-test-preload-751780" [75b1fb2e-698c-475e-9f4c-30015425a522] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:38:57.343584   38916 system_pods.go:89] "kube-apiserver-test-preload-751780" [c6b98401-2b0a-475b-9e15-b94bab489e2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:38:57.343598   38916 system_pods.go:89] "kube-controller-manager-test-preload-751780" [d7e250d7-7e96-48d7-9ec4-6bc144d4a7ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:38:57.343605   38916 system_pods.go:89] "kube-proxy-8hbbs" [8e877548-ddd5-4744-9032-fb1d2c274e6b] Running
	I1219 03:38:57.343619   38916 system_pods.go:89] "kube-scheduler-test-preload-751780" [5e049f70-50bf-4a72-97b1-ade7d568b673] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:38:57.343628   38916 system_pods.go:89] "storage-provisioner" [5a233c3f-32dd-4d2d-bc52-605c0969517c] Running
	I1219 03:38:57.343637   38916 system_pods.go:126] duration metric: took 2.721569ms to wait for k8s-apps to be running ...
	I1219 03:38:57.343647   38916 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:38:57.343695   38916 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:38:57.359503   38916 system_svc.go:56] duration metric: took 15.852314ms WaitForService to wait for kubelet
	I1219 03:38:57.359524   38916 kubeadm.go:587] duration metric: took 271.297961ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:38:57.359543   38916 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:38:57.362148   38916 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1219 03:38:57.362166   38916 node_conditions.go:123] node cpu capacity is 2
	I1219 03:38:57.362179   38916 node_conditions.go:105] duration metric: took 2.630587ms to run NodePressure ...
	I1219 03:38:57.362192   38916 start.go:242] waiting for startup goroutines ...
	I1219 03:38:57.381450   38916 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:38:57.385865   38916 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:38:58.063806   38916 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1219 03:38:58.064860   38916 addons.go:546] duration metric: took 976.547928ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1219 03:38:58.064895   38916 start.go:247] waiting for cluster config update ...
	I1219 03:38:58.064905   38916 start.go:256] writing updated cluster config ...
	I1219 03:38:58.065114   38916 ssh_runner.go:195] Run: rm -f paused
	I1219 03:38:58.070170   38916 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:38:58.070552   38916 kapi.go:59] client config for test-preload-751780: &rest.Config{Host:"https://192.168.39.240:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22230-5010/.minikube/profiles/test-preload-751780/client.crt", KeyFile:"/home/jenkins/minikube-integration/22230-5010/.minikube/profiles/test-preload-751780/client.key", CAFile:"/home/jenkins/minikube-integration/22230-5010/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2863880), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1219 03:38:58.074127   38916 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wsc95" in "kube-system" namespace to be "Ready" or be gone ...
	W1219 03:39:00.081838   38916 pod_ready.go:104] pod "coredns-66bc5c9577-wsc95" is not "Ready", error: <nil>
	W1219 03:39:02.580265   38916 pod_ready.go:104] pod "coredns-66bc5c9577-wsc95" is not "Ready", error: <nil>
	W1219 03:39:04.580504   38916 pod_ready.go:104] pod "coredns-66bc5c9577-wsc95" is not "Ready", error: <nil>
	W1219 03:39:06.581217   38916 pod_ready.go:104] pod "coredns-66bc5c9577-wsc95" is not "Ready", error: <nil>
	I1219 03:39:08.082070   38916 pod_ready.go:94] pod "coredns-66bc5c9577-wsc95" is "Ready"
	I1219 03:39:08.082094   38916 pod_ready.go:86] duration metric: took 10.007943185s for pod "coredns-66bc5c9577-wsc95" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:39:08.084522   38916 pod_ready.go:83] waiting for pod "etcd-test-preload-751780" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:39:09.090035   38916 pod_ready.go:94] pod "etcd-test-preload-751780" is "Ready"
	I1219 03:39:09.090073   38916 pod_ready.go:86] duration metric: took 1.005532083s for pod "etcd-test-preload-751780" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:39:09.092244   38916 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-751780" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:39:09.096484   38916 pod_ready.go:94] pod "kube-apiserver-test-preload-751780" is "Ready"
	I1219 03:39:09.096503   38916 pod_ready.go:86] duration metric: took 4.240559ms for pod "kube-apiserver-test-preload-751780" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:39:09.098172   38916 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-751780" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:39:09.101952   38916 pod_ready.go:94] pod "kube-controller-manager-test-preload-751780" is "Ready"
	I1219 03:39:09.101975   38916 pod_ready.go:86] duration metric: took 3.783324ms for pod "kube-controller-manager-test-preload-751780" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:39:09.277746   38916 pod_ready.go:83] waiting for pod "kube-proxy-8hbbs" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:39:09.678413   38916 pod_ready.go:94] pod "kube-proxy-8hbbs" is "Ready"
	I1219 03:39:09.678448   38916 pod_ready.go:86] duration metric: took 400.667561ms for pod "kube-proxy-8hbbs" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:39:09.877605   38916 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-751780" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:39:10.277222   38916 pod_ready.go:94] pod "kube-scheduler-test-preload-751780" is "Ready"
	I1219 03:39:10.277258   38916 pod_ready.go:86] duration metric: took 399.61992ms for pod "kube-scheduler-test-preload-751780" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:39:10.277273   38916 pod_ready.go:40] duration metric: took 12.207079999s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
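The pod_ready.go loop above waits for every kube-system pod carrying one of the control-plane labels to report Ready. A condensed client-go sketch of that kind of wait; the label set is taken from the log line, while the kubeconfig path and intervals are placeholders:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's Ready condition is True.
func isReady(pod corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path; the test uses the profile's client certs.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Label selectors matching the log line above.
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for _, sel := range selectors {
		for {
			pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
			if err != nil {
				log.Fatal(err)
			}
			ready := len(pods.Items) > 0
			for _, p := range pods.Items {
				if !isReady(p) {
					ready = false
				}
			}
			if ready {
				fmt.Printf("pods for %q are Ready\n", sel)
				break
			}
			time.Sleep(2 * time.Second)
		}
	}
}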
	I1219 03:39:10.317596   38916 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 03:39:10.319253   38916 out.go:179] * Done! kubectl is now configured to use "test-preload-751780" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 19 03:39:11 test-preload-751780 crio[837]: time="2025-12-19 03:39:11.031505499Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766115551031485825,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135813,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=be950681-3edb-47c0-a932-52ddbd34dd9d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 03:39:11 test-preload-751780 crio[837]: time="2025-12-19 03:39:11.032453240Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4aa42ba1-8f90-4dc9-946c-502b9eb77786 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 03:39:11 test-preload-751780 crio[837]: time="2025-12-19 03:39:11.032514156Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4aa42ba1-8f90-4dc9-946c-502b9eb77786 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 03:39:11 test-preload-751780 crio[837]: time="2025-12-19 03:39:11.032756318Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:811525ad38ce89d37c99d8e1d1fd662ddca4037ddd3cb076f780a5102fc7a0e6,PodSandboxId:d30d1c5158c0ea35635bd0812702c6809947098f0c7737bedc9589bbd8165536,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1766115539665872518,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-wsc95,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dbb33d2-d6cd-4e64-91fc-e8fa27ed17fe,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49a14ba0f64f44df57c3df42bf6714c3434720b04263ad4bdee6fafbfe597601,PodSandboxId:e33ac5057021cc1ccd1ad19fe404f7352183ac83cc5c006433a3f42c936187d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1766115536078486684,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8hbbs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e877548-ddd5-4744-9032-fb1d2c274e6b,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55d6dbb871aa463cb901830c10f2d9987e3d88cd78c8f96c9f4cc04785d21bbf,PodSandboxId:cb656da5429228927de279ffc687cd3056ccd72168efdd5d88b36a4e07da5771,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766115536041130425,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a233c3f-32dd-4d2d-bc52-605c0969517c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14d509559da95b55d07c8c7acb65cde70630aaf9f0ebd3565a1f142d2dd5a80d,PodSandboxId:24ab5413ec023729bc8c213566258afc8a1128bc1f3a673110e539fc849f661d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766115532490486876,Labels:map[string]string{io.kubernetes.container.name: etcd,io.k
ubernetes.pod.name: etcd-test-preload-751780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f6fd07a82c634b83a18f65e7b01e096,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c01f11a05aaa52fea185262cac5199c7bf1d1f6f0b9eb989bb641b06abc7d85c,PodSandboxId:792fb79a19441a92938bf275dfdd1339601a0c5b5b0fed9a84bc7436f4aa0b7c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUN
NING,CreatedAt:1766115532482897499,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-751780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 605ddbe182503b490085c79658eed756,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c0d17ebb5961ed57f81f43d90a6cca91b5fd5232f120a7e2ab5f55398f7a922,PodSandboxId:364dee2e6c18a0ef4f17abf3d548bb4abab9695b26825b212896c2e3695bd89e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},U
serSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1766115532448641414,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-751780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aafe83ed78554d04dab2eb5723736fd6,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33f149474fc20f05a18c0ca75f1850df1e49b7b26094445c2066525c13d8adac,PodSandboxId:1755f448e71d133467855324377ade3766e7a1eb9ef88d74782c2f17ad59c435,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&Imag
eSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766115532433704004,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-751780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 310a18c35218247d13e362d03dae2ad5,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4aa42ba1-8f90-4dc9-946c-502b9eb77786 name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 19 03:39:11 test-preload-751780 crio[837]: time="2025-12-19 03:39:11.064253565Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b3075fc4-499f-4892-82b7-089e72fb93d8 name=/runtime.v1.RuntimeService/Version
	Dec 19 03:39:11 test-preload-751780 crio[837]: time="2025-12-19 03:39:11.064326568Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b3075fc4-499f-4892-82b7-089e72fb93d8 name=/runtime.v1.RuntimeService/Version
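The CRI-O debug entries in this section are the runtime answering the kubelet's periodic Version, ImageFsInfo, and ListContainers RPCs. The same endpoints can be exercised by hand with crictl against the CRI-O socket; a small sketch that shells out to crictl, assuming crictl is on PATH and CRI-O listens on its default socket:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// The same RPCs the kubelet issues in the log above, driven via crictl.
	endpoint := "unix:///var/run/crio/crio.sock"
	for _, args := range [][]string{
		{"version"},     // RuntimeService/Version
		{"imagefsinfo"}, // ImageService/ImageFsInfo
		{"ps", "-a"},    // RuntimeService/ListContainers
	} {
		cmd := exec.Command("crictl", append([]string{"--runtime-endpoint", endpoint}, args...)...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("crictl %v: %v", args, err)
		}
	}
}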
	Dec 19 03:39:11 test-preload-751780 crio[837]: time="2025-12-19 03:39:11.065931993Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=51b788ee-8872-4009-9da1-593c25678a6d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 03:39:11 test-preload-751780 crio[837]: time="2025-12-19 03:39:11.066441931Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766115551066419397,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135813,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=51b788ee-8872-4009-9da1-593c25678a6d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 03:39:11 test-preload-751780 crio[837]: time="2025-12-19 03:39:11.067543772Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=717d6d25-1002-4af9-9971-1d7af60eaa57 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 03:39:11 test-preload-751780 crio[837]: time="2025-12-19 03:39:11.067660053Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=717d6d25-1002-4af9-9971-1d7af60eaa57 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 03:39:11 test-preload-751780 crio[837]: time="2025-12-19 03:39:11.068099141Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:811525ad38ce89d37c99d8e1d1fd662ddca4037ddd3cb076f780a5102fc7a0e6,PodSandboxId:d30d1c5158c0ea35635bd0812702c6809947098f0c7737bedc9589bbd8165536,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1766115539665872518,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-wsc95,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dbb33d2-d6cd-4e64-91fc-e8fa27ed17fe,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49a14ba0f64f44df57c3df42bf6714c3434720b04263ad4bdee6fafbfe597601,PodSandboxId:e33ac5057021cc1ccd1ad19fe404f7352183ac83cc5c006433a3f42c936187d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1766115536078486684,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8hbbs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e877548-ddd5-4744-9032-fb1d2c274e6b,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55d6dbb871aa463cb901830c10f2d9987e3d88cd78c8f96c9f4cc04785d21bbf,PodSandboxId:cb656da5429228927de279ffc687cd3056ccd72168efdd5d88b36a4e07da5771,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766115536041130425,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a233c3f-32dd-4d2d-bc52-605c0969517c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14d509559da95b55d07c8c7acb65cde70630aaf9f0ebd3565a1f142d2dd5a80d,PodSandboxId:24ab5413ec023729bc8c213566258afc8a1128bc1f3a673110e539fc849f661d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766115532490486876,Labels:map[string]string{io.kubernetes.container.name: etcd,io.k
ubernetes.pod.name: etcd-test-preload-751780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f6fd07a82c634b83a18f65e7b01e096,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c01f11a05aaa52fea185262cac5199c7bf1d1f6f0b9eb989bb641b06abc7d85c,PodSandboxId:792fb79a19441a92938bf275dfdd1339601a0c5b5b0fed9a84bc7436f4aa0b7c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUN
NING,CreatedAt:1766115532482897499,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-751780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 605ddbe182503b490085c79658eed756,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c0d17ebb5961ed57f81f43d90a6cca91b5fd5232f120a7e2ab5f55398f7a922,PodSandboxId:364dee2e6c18a0ef4f17abf3d548bb4abab9695b26825b212896c2e3695bd89e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},U
serSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1766115532448641414,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-751780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aafe83ed78554d04dab2eb5723736fd6,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33f149474fc20f05a18c0ca75f1850df1e49b7b26094445c2066525c13d8adac,PodSandboxId:1755f448e71d133467855324377ade3766e7a1eb9ef88d74782c2f17ad59c435,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&Imag
eSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766115532433704004,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-751780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 310a18c35218247d13e362d03dae2ad5,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=717d6d25-1002-4af9-9971-1d7af60eaa57 name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 19 03:39:11 test-preload-751780 crio[837]: time="2025-12-19 03:39:11.100706027Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=445ef9c1-46d1-47f4-81f2-21eb0a5c84da name=/runtime.v1.RuntimeService/Version
	Dec 19 03:39:11 test-preload-751780 crio[837]: time="2025-12-19 03:39:11.100763990Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=445ef9c1-46d1-47f4-81f2-21eb0a5c84da name=/runtime.v1.RuntimeService/Version
	Dec 19 03:39:11 test-preload-751780 crio[837]: time="2025-12-19 03:39:11.102306076Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c5144204-fdbc-44a2-bda4-9d43757633c0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 03:39:11 test-preload-751780 crio[837]: time="2025-12-19 03:39:11.102735366Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766115551102713847,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135813,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c5144204-fdbc-44a2-bda4-9d43757633c0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 03:39:11 test-preload-751780 crio[837]: time="2025-12-19 03:39:11.103607017Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4949ec08-78dd-461b-b780-35efb89dad02 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 03:39:11 test-preload-751780 crio[837]: time="2025-12-19 03:39:11.103652178Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4949ec08-78dd-461b-b780-35efb89dad02 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 03:39:11 test-preload-751780 crio[837]: time="2025-12-19 03:39:11.103784796Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:811525ad38ce89d37c99d8e1d1fd662ddca4037ddd3cb076f780a5102fc7a0e6,PodSandboxId:d30d1c5158c0ea35635bd0812702c6809947098f0c7737bedc9589bbd8165536,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1766115539665872518,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-wsc95,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dbb33d2-d6cd-4e64-91fc-e8fa27ed17fe,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49a14ba0f64f44df57c3df42bf6714c3434720b04263ad4bdee6fafbfe597601,PodSandboxId:e33ac5057021cc1ccd1ad19fe404f7352183ac83cc5c006433a3f42c936187d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1766115536078486684,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8hbbs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e877548-ddd5-4744-9032-fb1d2c274e6b,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55d6dbb871aa463cb901830c10f2d9987e3d88cd78c8f96c9f4cc04785d21bbf,PodSandboxId:cb656da5429228927de279ffc687cd3056ccd72168efdd5d88b36a4e07da5771,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766115536041130425,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a233c3f-32dd-4d2d-bc52-605c0969517c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14d509559da95b55d07c8c7acb65cde70630aaf9f0ebd3565a1f142d2dd5a80d,PodSandboxId:24ab5413ec023729bc8c213566258afc8a1128bc1f3a673110e539fc849f661d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766115532490486876,Labels:map[string]string{io.kubernetes.container.name: etcd,io.k
ubernetes.pod.name: etcd-test-preload-751780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f6fd07a82c634b83a18f65e7b01e096,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c01f11a05aaa52fea185262cac5199c7bf1d1f6f0b9eb989bb641b06abc7d85c,PodSandboxId:792fb79a19441a92938bf275dfdd1339601a0c5b5b0fed9a84bc7436f4aa0b7c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUN
NING,CreatedAt:1766115532482897499,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-751780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 605ddbe182503b490085c79658eed756,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c0d17ebb5961ed57f81f43d90a6cca91b5fd5232f120a7e2ab5f55398f7a922,PodSandboxId:364dee2e6c18a0ef4f17abf3d548bb4abab9695b26825b212896c2e3695bd89e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},U
serSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1766115532448641414,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-751780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aafe83ed78554d04dab2eb5723736fd6,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33f149474fc20f05a18c0ca75f1850df1e49b7b26094445c2066525c13d8adac,PodSandboxId:1755f448e71d133467855324377ade3766e7a1eb9ef88d74782c2f17ad59c435,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&Imag
eSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766115532433704004,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-751780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 310a18c35218247d13e362d03dae2ad5,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4949ec08-78dd-461b-b780-35efb89dad02 name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 19 03:39:11 test-preload-751780 crio[837]: time="2025-12-19 03:39:11.130246765Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4fb92a6a-7b4a-49d4-8892-abe94742e1cf name=/runtime.v1.RuntimeService/Version
	Dec 19 03:39:11 test-preload-751780 crio[837]: time="2025-12-19 03:39:11.130316250Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4fb92a6a-7b4a-49d4-8892-abe94742e1cf name=/runtime.v1.RuntimeService/Version
	Dec 19 03:39:11 test-preload-751780 crio[837]: time="2025-12-19 03:39:11.131231113Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=65b93013-9ef3-411d-aa15-e27c8618ea8b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 03:39:11 test-preload-751780 crio[837]: time="2025-12-19 03:39:11.131652187Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766115551131632812,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135813,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=65b93013-9ef3-411d-aa15-e27c8618ea8b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 03:39:11 test-preload-751780 crio[837]: time="2025-12-19 03:39:11.132505153Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=48525085-efad-434b-b8d7-345cd226a305 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 03:39:11 test-preload-751780 crio[837]: time="2025-12-19 03:39:11.132562567Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=48525085-efad-434b-b8d7-345cd226a305 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 03:39:11 test-preload-751780 crio[837]: time="2025-12-19 03:39:11.132955995Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:811525ad38ce89d37c99d8e1d1fd662ddca4037ddd3cb076f780a5102fc7a0e6,PodSandboxId:d30d1c5158c0ea35635bd0812702c6809947098f0c7737bedc9589bbd8165536,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1766115539665872518,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-wsc95,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dbb33d2-d6cd-4e64-91fc-e8fa27ed17fe,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49a14ba0f64f44df57c3df42bf6714c3434720b04263ad4bdee6fafbfe597601,PodSandboxId:e33ac5057021cc1ccd1ad19fe404f7352183ac83cc5c006433a3f42c936187d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1766115536078486684,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8hbbs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e877548-ddd5-4744-9032-fb1d2c274e6b,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55d6dbb871aa463cb901830c10f2d9987e3d88cd78c8f96c9f4cc04785d21bbf,PodSandboxId:cb656da5429228927de279ffc687cd3056ccd72168efdd5d88b36a4e07da5771,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766115536041130425,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a233c3f-32dd-4d2d-bc52-605c0969517c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14d509559da95b55d07c8c7acb65cde70630aaf9f0ebd3565a1f142d2dd5a80d,PodSandboxId:24ab5413ec023729bc8c213566258afc8a1128bc1f3a673110e539fc849f661d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766115532490486876,Labels:map[string]string{io.kubernetes.container.name: etcd,io.k
ubernetes.pod.name: etcd-test-preload-751780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f6fd07a82c634b83a18f65e7b01e096,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c01f11a05aaa52fea185262cac5199c7bf1d1f6f0b9eb989bb641b06abc7d85c,PodSandboxId:792fb79a19441a92938bf275dfdd1339601a0c5b5b0fed9a84bc7436f4aa0b7c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUN
NING,CreatedAt:1766115532482897499,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-751780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 605ddbe182503b490085c79658eed756,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c0d17ebb5961ed57f81f43d90a6cca91b5fd5232f120a7e2ab5f55398f7a922,PodSandboxId:364dee2e6c18a0ef4f17abf3d548bb4abab9695b26825b212896c2e3695bd89e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},U
serSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1766115532448641414,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-751780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aafe83ed78554d04dab2eb5723736fd6,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33f149474fc20f05a18c0ca75f1850df1e49b7b26094445c2066525c13d8adac,PodSandboxId:1755f448e71d133467855324377ade3766e7a1eb9ef88d74782c2f17ad59c435,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&Imag
eSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766115532433704004,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-751780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 310a18c35218247d13e362d03dae2ad5,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=48525085-efad-434b-b8d7-345cd226a305 name=/runtime.v1.RuntimeServic
e/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                           NAMESPACE
	811525ad38ce8       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   11 seconds ago      Running             coredns                   1                   d30d1c5158c0e       coredns-66bc5c9577-wsc95                      kube-system
	49a14ba0f64f4       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691   15 seconds ago      Running             kube-proxy                1                   e33ac5057021c       kube-proxy-8hbbs                              kube-system
	55d6dbb871aa4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 seconds ago      Running             storage-provisioner       1                   cb656da542922       storage-provisioner                           kube-system
	14d509559da95       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   18 seconds ago      Running             etcd                      1                   24ab5413ec023       etcd-test-preload-751780                      kube-system
	c01f11a05aaa5       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942   18 seconds ago      Running             kube-controller-manager   1                   792fb79a19441       kube-controller-manager-test-preload-751780   kube-system
	5c0d17ebb5961       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c   18 seconds ago      Running             kube-apiserver            1                   364dee2e6c18a       kube-apiserver-test-preload-751780            kube-system
	33f149474fc20       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78   18 seconds ago      Running             kube-scheduler            1                   1755f448e71d1       kube-scheduler-test-preload-751780            kube-system
	
	
	==> coredns [811525ad38ce89d37c99d8e1d1fd662ddca4037ddd3cb076f780a5102fc7a0e6] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48468 - 11019 "HINFO IN 6557800436290041627.6970907043068401939. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.159363747s
	
	
	==> describe nodes <==
	Name:               test-preload-751780
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-751780
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=test-preload-751780
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T03_37_31_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 03:37:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-751780
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 03:39:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 03:38:56 +0000   Fri, 19 Dec 2025 03:37:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 03:38:56 +0000   Fri, 19 Dec 2025 03:37:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 03:38:56 +0000   Fri, 19 Dec 2025 03:37:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 03:38:56 +0000   Fri, 19 Dec 2025 03:38:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.240
	  Hostname:    test-preload-751780
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 3785bfd25e6f411898917ee91ed1ffe7
	  System UUID:                3785bfd2-5e6f-4118-9891-7ee91ed1ffe7
	  Boot ID:                    e8b7b8d5-0562-4936-993d-27b987bf81bb
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-wsc95                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     96s
	  kube-system                 etcd-test-preload-751780                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         101s
	  kube-system                 kube-apiserver-test-preload-751780             250m (12%)    0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-controller-manager-test-preload-751780    200m (10%)    0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-proxy-8hbbs                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-scheduler-test-preload-751780             100m (5%)     0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 94s                kube-proxy       
	  Normal   Starting                 14s                kube-proxy       
	  Normal   NodeHasSufficientMemory  101s               kubelet          Node test-preload-751780 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  101s               kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    101s               kubelet          Node test-preload-751780 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     101s               kubelet          Node test-preload-751780 status is now: NodeHasSufficientPID
	  Normal   Starting                 101s               kubelet          Starting kubelet.
	  Normal   NodeReady                100s               kubelet          Node test-preload-751780 status is now: NodeReady
	  Normal   RegisteredNode           97s                node-controller  Node test-preload-751780 event: Registered Node test-preload-751780 in Controller
	  Normal   Starting                 21s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  21s (x8 over 21s)  kubelet          Node test-preload-751780 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    21s (x8 over 21s)  kubelet          Node test-preload-751780 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     21s (x7 over 21s)  kubelet          Node test-preload-751780 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  21s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 16s                kubelet          Node test-preload-751780 has been rebooted, boot id: e8b7b8d5-0562-4936-993d-27b987bf81bb
	  Normal   RegisteredNode           13s                node-controller  Node test-preload-751780 event: Registered Node test-preload-751780 in Controller
	
	
	==> dmesg <==
	[Dec19 03:38] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001572] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.004337] (rpcbind)[120]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.986358] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.112004] kauditd_printk_skb: 60 callbacks suppressed
	[  +0.096510] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.506389] kauditd_printk_skb: 168 callbacks suppressed
	[Dec19 03:39] kauditd_printk_skb: 197 callbacks suppressed
	
	
	==> etcd [14d509559da95b55d07c8c7acb65cde70630aaf9f0ebd3565a1f142d2dd5a80d] <==
	{"level":"warn","ts":"2025-12-19T03:38:54.363861Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:38:54.385196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:38:54.394711Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:38:54.405706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:38:54.414954Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:38:54.443491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:38:54.452831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:38:54.466271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:38:54.479481Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:38:54.494529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:38:54.508749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:38:54.520067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:38:54.533288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:38:54.543451Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:38:54.557639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:38:54.569887Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:38:54.594358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:38:54.603096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:38:54.610886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:38:54.620534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:38:54.634383Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:38:54.645457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:38:54.656740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:38:54.664732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:38:54.713090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35970","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 03:39:11 up 0 min,  0 users,  load average: 0.34, 0.09, 0.03
	Linux test-preload-751780 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Dec 17 12:49:57 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [5c0d17ebb5961ed57f81f43d90a6cca91b5fd5232f120a7e2ab5f55398f7a922] <==
	I1219 03:38:55.492079       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1219 03:38:55.492265       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1219 03:38:55.492293       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1219 03:38:55.492298       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1219 03:38:55.492894       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1219 03:38:55.494344       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	E1219 03:38:55.503211       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1219 03:38:55.505224       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1219 03:38:55.505284       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1219 03:38:55.505827       1 policy_source.go:240] refreshing policies
	I1219 03:38:55.505928       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1219 03:38:55.505992       1 aggregator.go:171] initial CRD sync complete...
	I1219 03:38:55.506014       1 autoregister_controller.go:144] Starting autoregister controller
	I1219 03:38:55.506088       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1219 03:38:55.506128       1 cache.go:39] Caches are synced for autoregister controller
	I1219 03:38:55.524512       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1219 03:38:55.742164       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1219 03:38:56.296857       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1219 03:38:56.874912       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1219 03:38:56.906126       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1219 03:38:56.930610       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1219 03:38:56.936477       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1219 03:38:58.887363       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1219 03:38:59.086235       1 controller.go:667] quota admission added evaluator for: endpoints
	I1219 03:38:59.137215       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [c01f11a05aaa52fea185262cac5199c7bf1d1f6f0b9eb989bb641b06abc7d85c] <==
	I1219 03:38:58.783341       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1219 03:38:58.783446       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1219 03:38:58.784211       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1219 03:38:58.785445       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1219 03:38:58.786604       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1219 03:38:58.786620       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1219 03:38:58.790425       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1219 03:38:58.791579       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1219 03:38:58.798986       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1219 03:38:58.800186       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1219 03:38:58.807511       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1219 03:38:58.810793       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1219 03:38:58.815007       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1219 03:38:58.815106       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1219 03:38:58.821398       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1219 03:38:58.821423       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1219 03:38:58.821429       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1219 03:38:58.821538       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1219 03:38:58.823192       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1219 03:38:58.826057       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1219 03:38:58.827949       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1219 03:38:58.832615       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1219 03:38:58.832649       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1219 03:38:58.833756       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1219 03:38:58.833920       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	
	
	==> kube-proxy [49a14ba0f64f44df57c3df42bf6714c3434720b04263ad4bdee6fafbfe597601] <==
	I1219 03:38:56.266555       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1219 03:38:56.367519       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1219 03:38:56.367618       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.240"]
	E1219 03:38:56.368018       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 03:38:56.443222       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1219 03:38:56.443283       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1219 03:38:56.443309       1 server_linux.go:132] "Using iptables Proxier"
	I1219 03:38:56.456189       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 03:38:56.456422       1 server.go:527] "Version info" version="v1.34.3"
	I1219 03:38:56.456850       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:38:56.462518       1 config.go:200] "Starting service config controller"
	I1219 03:38:56.462542       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 03:38:56.462570       1 config.go:106] "Starting endpoint slice config controller"
	I1219 03:38:56.462574       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 03:38:56.462587       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 03:38:56.462590       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 03:38:56.462923       1 config.go:309] "Starting node config controller"
	I1219 03:38:56.463264       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 03:38:56.562691       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1219 03:38:56.562728       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 03:38:56.562751       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1219 03:38:56.564102       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [33f149474fc20f05a18c0ca75f1850df1e49b7b26094445c2066525c13d8adac] <==
	I1219 03:38:54.605402       1 serving.go:386] Generated self-signed cert in-memory
	I1219 03:38:55.655205       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1219 03:38:55.655253       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:38:55.666453       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1219 03:38:55.666537       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1219 03:38:55.666616       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:38:55.666636       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:38:55.666652       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1219 03:38:55.666656       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1219 03:38:55.668798       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1219 03:38:55.668895       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1219 03:38:55.767155       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1219 03:38:55.768265       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1219 03:38:55.768326       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 19 03:38:55 test-preload-751780 kubelet[1179]: I1219 03:38:55.577412    1179 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 19 03:38:55 test-preload-751780 kubelet[1179]: I1219 03:38:55.578531    1179 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 19 03:38:55 test-preload-751780 kubelet[1179]: I1219 03:38:55.579465    1179 setters.go:543] "Node became not ready" node="test-preload-751780" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-19T03:38:55Z","lastTransitionTime":"2025-12-19T03:38:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"}
	Dec 19 03:38:55 test-preload-751780 kubelet[1179]: I1219 03:38:55.588904    1179 apiserver.go:52] "Watching apiserver"
	Dec 19 03:38:55 test-preload-751780 kubelet[1179]: E1219 03:38:55.599254    1179 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-66bc5c9577-wsc95" podUID="7dbb33d2-d6cd-4e64-91fc-e8fa27ed17fe"
	Dec 19 03:38:55 test-preload-751780 kubelet[1179]: I1219 03:38:55.697360    1179 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 19 03:38:55 test-preload-751780 kubelet[1179]: I1219 03:38:55.722464    1179 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/etcd-test-preload-751780"
	Dec 19 03:38:55 test-preload-751780 kubelet[1179]: I1219 03:38:55.723380    1179 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-test-preload-751780"
	Dec 19 03:38:55 test-preload-751780 kubelet[1179]: I1219 03:38:55.730918    1179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5a233c3f-32dd-4d2d-bc52-605c0969517c-tmp\") pod \"storage-provisioner\" (UID: \"5a233c3f-32dd-4d2d-bc52-605c0969517c\") " pod="kube-system/storage-provisioner"
	Dec 19 03:38:55 test-preload-751780 kubelet[1179]: E1219 03:38:55.731948    1179 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 19 03:38:55 test-preload-751780 kubelet[1179]: E1219 03:38:55.732005    1179 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7dbb33d2-d6cd-4e64-91fc-e8fa27ed17fe-config-volume podName:7dbb33d2-d6cd-4e64-91fc-e8fa27ed17fe nodeName:}" failed. No retries permitted until 2025-12-19 03:38:56.231988857 +0000 UTC m=+5.745195914 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7dbb33d2-d6cd-4e64-91fc-e8fa27ed17fe-config-volume") pod "coredns-66bc5c9577-wsc95" (UID: "7dbb33d2-d6cd-4e64-91fc-e8fa27ed17fe") : object "kube-system"/"coredns" not registered
	Dec 19 03:38:55 test-preload-751780 kubelet[1179]: I1219 03:38:55.732200    1179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8e877548-ddd5-4744-9032-fb1d2c274e6b-lib-modules\") pod \"kube-proxy-8hbbs\" (UID: \"8e877548-ddd5-4744-9032-fb1d2c274e6b\") " pod="kube-system/kube-proxy-8hbbs"
	Dec 19 03:38:55 test-preload-751780 kubelet[1179]: I1219 03:38:55.732249    1179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8e877548-ddd5-4744-9032-fb1d2c274e6b-xtables-lock\") pod \"kube-proxy-8hbbs\" (UID: \"8e877548-ddd5-4744-9032-fb1d2c274e6b\") " pod="kube-system/kube-proxy-8hbbs"
	Dec 19 03:38:55 test-preload-751780 kubelet[1179]: E1219 03:38:55.750316    1179 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-test-preload-751780\" already exists" pod="kube-system/etcd-test-preload-751780"
	Dec 19 03:38:55 test-preload-751780 kubelet[1179]: E1219 03:38:55.750320    1179 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-test-preload-751780\" already exists" pod="kube-system/kube-scheduler-test-preload-751780"
	Dec 19 03:38:56 test-preload-751780 kubelet[1179]: E1219 03:38:56.235282    1179 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 19 03:38:56 test-preload-751780 kubelet[1179]: E1219 03:38:56.235359    1179 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7dbb33d2-d6cd-4e64-91fc-e8fa27ed17fe-config-volume podName:7dbb33d2-d6cd-4e64-91fc-e8fa27ed17fe nodeName:}" failed. No retries permitted until 2025-12-19 03:38:57.235340709 +0000 UTC m=+6.748547766 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7dbb33d2-d6cd-4e64-91fc-e8fa27ed17fe-config-volume") pod "coredns-66bc5c9577-wsc95" (UID: "7dbb33d2-d6cd-4e64-91fc-e8fa27ed17fe") : object "kube-system"/"coredns" not registered
	Dec 19 03:38:56 test-preload-751780 kubelet[1179]: I1219 03:38:56.776159    1179 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 19 03:38:57 test-preload-751780 kubelet[1179]: E1219 03:38:57.241312    1179 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 19 03:38:57 test-preload-751780 kubelet[1179]: E1219 03:38:57.241395    1179 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7dbb33d2-d6cd-4e64-91fc-e8fa27ed17fe-config-volume podName:7dbb33d2-d6cd-4e64-91fc-e8fa27ed17fe nodeName:}" failed. No retries permitted until 2025-12-19 03:38:59.241378617 +0000 UTC m=+8.754585675 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7dbb33d2-d6cd-4e64-91fc-e8fa27ed17fe-config-volume") pod "coredns-66bc5c9577-wsc95" (UID: "7dbb33d2-d6cd-4e64-91fc-e8fa27ed17fe") : object "kube-system"/"coredns" not registered
	Dec 19 03:39:00 test-preload-751780 kubelet[1179]: E1219 03:39:00.668336    1179 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766115540667690634  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:135813}  inodes_used:{value:64}}"
	Dec 19 03:39:00 test-preload-751780 kubelet[1179]: E1219 03:39:00.668374    1179 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766115540667690634  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:135813}  inodes_used:{value:64}}"
	Dec 19 03:39:07 test-preload-751780 kubelet[1179]: I1219 03:39:07.987956    1179 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 19 03:39:10 test-preload-751780 kubelet[1179]: E1219 03:39:10.669902    1179 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766115550669653349  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:135813}  inodes_used:{value:64}}"
	Dec 19 03:39:10 test-preload-751780 kubelet[1179]: E1219 03:39:10.669924    1179 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766115550669653349  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:135813}  inodes_used:{value:64}}"
	
	
	==> storage-provisioner [55d6dbb871aa463cb901830c10f2d9987e3d88cd78c8f96c9f4cc04785d21bbf] <==
	I1219 03:38:56.152137       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-751780 -n test-preload-751780
helpers_test.go:270: (dbg) Run:  kubectl --context test-preload-751780 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:176: Cleaning up "test-preload-751780" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-751780
E1219 03:39:12.683409    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- FAIL: TestPreload (146.77s)

TestPause/serial/SecondStartNoReconfiguration (67.19s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-813136 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-813136 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m3.909909224s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-813136] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22230
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22230-5010/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5010/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-813136" primary control-plane node in "pause-813136" cluster
	* Preparing Kubernetes v1.34.3 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-813136" cluster and "default" namespace by default

-- /stdout --
** stderr ** 
	I1219 03:44:12.215871   44926 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:44:12.216109   44926 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:44:12.216128   44926 out.go:374] Setting ErrFile to fd 2...
	I1219 03:44:12.216153   44926 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:44:12.216544   44926 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5010/.minikube/bin
	I1219 03:44:12.217337   44926 out.go:368] Setting JSON to false
	I1219 03:44:12.218758   44926 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5196,"bootTime":1766110656,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 03:44:12.218852   44926 start.go:143] virtualization: kvm guest
	I1219 03:44:12.282418   44926 out.go:179] * [pause-813136] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 03:44:12.299519   44926 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 03:44:12.299508   44926 notify.go:221] Checking for updates...
	I1219 03:44:12.318111   44926 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 03:44:12.319403   44926 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-5010/kubeconfig
	I1219 03:44:12.320587   44926 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5010/.minikube
	I1219 03:44:12.321712   44926 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 03:44:12.322871   44926 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 03:44:12.324824   44926 config.go:182] Loaded profile config "pause-813136": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:44:12.325699   44926 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 03:44:12.369796   44926 out.go:179] * Using the kvm2 driver based on existing profile
	I1219 03:44:12.370936   44926 start.go:309] selected driver: kvm2
	I1219 03:44:12.370958   44926 start.go:928] validating driver "kvm2" against &{Name:pause-813136 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.34.3 ClusterName:pause-813136 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.162 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-insta
ller:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:44:12.371099   44926 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 03:44:12.372263   44926 cni.go:84] Creating CNI manager for ""
	I1219 03:44:12.372327   44926 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1219 03:44:12.372384   44926 start.go:353] cluster config:
	{Name:pause-813136 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-813136 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.162 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false
portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:44:12.372503   44926 iso.go:125] acquiring lock: {Name:mk42290af04a74a4dddf27fa33aac85c9bdccfe0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:44:12.373863   44926 out.go:179] * Starting "pause-813136" primary control-plane node in "pause-813136" cluster
	I1219 03:44:12.375329   44926 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1219 03:44:12.375363   44926 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-5010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1219 03:44:12.375371   44926 cache.go:65] Caching tarball of preloaded images
	I1219 03:44:12.375485   44926 preload.go:238] Found /home/jenkins/minikube-integration/22230-5010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1219 03:44:12.375502   44926 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1219 03:44:12.375669   44926 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/pause-813136/config.json ...
	I1219 03:44:12.375959   44926 start.go:360] acquireMachinesLock for pause-813136: {Name:mk229398d29442d4d52885bbac963e5004bfbfba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1219 03:44:18.026968   44926 start.go:364] duration metric: took 5.650962597s to acquireMachinesLock for "pause-813136"
	I1219 03:44:18.027019   44926 start.go:96] Skipping create...Using existing machine configuration
	I1219 03:44:18.027028   44926 fix.go:54] fixHost starting: 
	I1219 03:44:18.029468   44926 fix.go:112] recreateIfNeeded on pause-813136: state=Running err=<nil>
	W1219 03:44:18.029494   44926 fix.go:138] unexpected machine state, will restart: <nil>
	I1219 03:44:18.034698   44926 out.go:252] * Updating the running kvm2 "pause-813136" VM ...
	I1219 03:44:18.034741   44926 machine.go:94] provisionDockerMachine start ...
	I1219 03:44:18.038017   44926 main.go:144] libmachine: domain pause-813136 has defined MAC address 52:54:00:11:07:28 in network mk-pause-813136
	I1219 03:44:18.038452   44926 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:11:07:28", ip: ""} in network mk-pause-813136: {Iface:virbr2 ExpiryTime:2025-12-19 04:43:05 +0000 UTC Type:0 Mac:52:54:00:11:07:28 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:pause-813136 Clientid:01:52:54:00:11:07:28}
	I1219 03:44:18.038488   44926 main.go:144] libmachine: domain pause-813136 has defined IP address 192.168.50.162 and MAC address 52:54:00:11:07:28 in network mk-pause-813136
	I1219 03:44:18.038718   44926 main.go:144] libmachine: Using SSH client type: native
	I1219 03:44:18.038966   44926 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.50.162 22 <nil> <nil>}
	I1219 03:44:18.038979   44926 main.go:144] libmachine: About to run SSH command:
	hostname
	I1219 03:44:18.153101   44926 main.go:144] libmachine: SSH cmd err, output: <nil>: pause-813136
	
	I1219 03:44:18.153128   44926 buildroot.go:166] provisioning hostname "pause-813136"
	I1219 03:44:18.156972   44926 main.go:144] libmachine: domain pause-813136 has defined MAC address 52:54:00:11:07:28 in network mk-pause-813136
	I1219 03:44:18.157462   44926 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:11:07:28", ip: ""} in network mk-pause-813136: {Iface:virbr2 ExpiryTime:2025-12-19 04:43:05 +0000 UTC Type:0 Mac:52:54:00:11:07:28 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:pause-813136 Clientid:01:52:54:00:11:07:28}
	I1219 03:44:18.157487   44926 main.go:144] libmachine: domain pause-813136 has defined IP address 192.168.50.162 and MAC address 52:54:00:11:07:28 in network mk-pause-813136
	I1219 03:44:18.157703   44926 main.go:144] libmachine: Using SSH client type: native
	I1219 03:44:18.157976   44926 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.50.162 22 <nil> <nil>}
	I1219 03:44:18.157991   44926 main.go:144] libmachine: About to run SSH command:
	sudo hostname pause-813136 && echo "pause-813136" | sudo tee /etc/hostname
	I1219 03:44:18.288294   44926 main.go:144] libmachine: SSH cmd err, output: <nil>: pause-813136
	
	I1219 03:44:18.291457   44926 main.go:144] libmachine: domain pause-813136 has defined MAC address 52:54:00:11:07:28 in network mk-pause-813136
	I1219 03:44:18.291941   44926 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:11:07:28", ip: ""} in network mk-pause-813136: {Iface:virbr2 ExpiryTime:2025-12-19 04:43:05 +0000 UTC Type:0 Mac:52:54:00:11:07:28 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:pause-813136 Clientid:01:52:54:00:11:07:28}
	I1219 03:44:18.291975   44926 main.go:144] libmachine: domain pause-813136 has defined IP address 192.168.50.162 and MAC address 52:54:00:11:07:28 in network mk-pause-813136
	I1219 03:44:18.292165   44926 main.go:144] libmachine: Using SSH client type: native
	I1219 03:44:18.292409   44926 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.50.162 22 <nil> <nil>}
	I1219 03:44:18.292425   44926 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-813136' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-813136/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-813136' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1219 03:44:18.406229   44926 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 03:44:18.406266   44926 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22230-5010/.minikube CaCertPath:/home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22230-5010/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22230-5010/.minikube}
	I1219 03:44:18.406315   44926 buildroot.go:174] setting up certificates
	I1219 03:44:18.406329   44926 provision.go:84] configureAuth start
	I1219 03:44:18.409676   44926 main.go:144] libmachine: domain pause-813136 has defined MAC address 52:54:00:11:07:28 in network mk-pause-813136
	I1219 03:44:18.410107   44926 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:11:07:28", ip: ""} in network mk-pause-813136: {Iface:virbr2 ExpiryTime:2025-12-19 04:43:05 +0000 UTC Type:0 Mac:52:54:00:11:07:28 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:pause-813136 Clientid:01:52:54:00:11:07:28}
	I1219 03:44:18.410128   44926 main.go:144] libmachine: domain pause-813136 has defined IP address 192.168.50.162 and MAC address 52:54:00:11:07:28 in network mk-pause-813136
	I1219 03:44:18.412367   44926 main.go:144] libmachine: domain pause-813136 has defined MAC address 52:54:00:11:07:28 in network mk-pause-813136
	I1219 03:44:18.412743   44926 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:11:07:28", ip: ""} in network mk-pause-813136: {Iface:virbr2 ExpiryTime:2025-12-19 04:43:05 +0000 UTC Type:0 Mac:52:54:00:11:07:28 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:pause-813136 Clientid:01:52:54:00:11:07:28}
	I1219 03:44:18.412769   44926 main.go:144] libmachine: domain pause-813136 has defined IP address 192.168.50.162 and MAC address 52:54:00:11:07:28 in network mk-pause-813136
	I1219 03:44:18.412902   44926 provision.go:143] copyHostCerts
	I1219 03:44:18.412955   44926 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-5010/.minikube/ca.pem, removing ...
	I1219 03:44:18.412975   44926 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-5010/.minikube/ca.pem
	I1219 03:44:18.413046   44926 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22230-5010/.minikube/ca.pem (1082 bytes)
	I1219 03:44:18.413181   44926 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-5010/.minikube/cert.pem, removing ...
	I1219 03:44:18.413195   44926 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-5010/.minikube/cert.pem
	I1219 03:44:18.413230   44926 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22230-5010/.minikube/cert.pem (1123 bytes)
	I1219 03:44:18.413325   44926 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-5010/.minikube/key.pem, removing ...
	I1219 03:44:18.413336   44926 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-5010/.minikube/key.pem
	I1219 03:44:18.413370   44926 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22230-5010/.minikube/key.pem (1679 bytes)
	I1219 03:44:18.413455   44926 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22230-5010/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca-key.pem org=jenkins.pause-813136 san=[127.0.0.1 192.168.50.162 localhost minikube pause-813136]
	I1219 03:44:18.452814   44926 provision.go:177] copyRemoteCerts
	I1219 03:44:18.452866   44926 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1219 03:44:18.455667   44926 main.go:144] libmachine: domain pause-813136 has defined MAC address 52:54:00:11:07:28 in network mk-pause-813136
	I1219 03:44:18.456083   44926 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:11:07:28", ip: ""} in network mk-pause-813136: {Iface:virbr2 ExpiryTime:2025-12-19 04:43:05 +0000 UTC Type:0 Mac:52:54:00:11:07:28 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:pause-813136 Clientid:01:52:54:00:11:07:28}
	I1219 03:44:18.456106   44926 main.go:144] libmachine: domain pause-813136 has defined IP address 192.168.50.162 and MAC address 52:54:00:11:07:28 in network mk-pause-813136
	I1219 03:44:18.456303   44926 sshutil.go:53] new ssh client: &{IP:192.168.50.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/pause-813136/id_rsa Username:docker}
	I1219 03:44:18.541095   44926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1219 03:44:18.569596   44926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1219 03:44:18.599840   44926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1219 03:44:18.632363   44926 provision.go:87] duration metric: took 226.017714ms to configureAuth
	I1219 03:44:18.632388   44926 buildroot.go:189] setting minikube options for container-runtime
	I1219 03:44:18.632645   44926 config.go:182] Loaded profile config "pause-813136": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:44:18.635757   44926 main.go:144] libmachine: domain pause-813136 has defined MAC address 52:54:00:11:07:28 in network mk-pause-813136
	I1219 03:44:18.636211   44926 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:11:07:28", ip: ""} in network mk-pause-813136: {Iface:virbr2 ExpiryTime:2025-12-19 04:43:05 +0000 UTC Type:0 Mac:52:54:00:11:07:28 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:pause-813136 Clientid:01:52:54:00:11:07:28}
	I1219 03:44:18.636236   44926 main.go:144] libmachine: domain pause-813136 has defined IP address 192.168.50.162 and MAC address 52:54:00:11:07:28 in network mk-pause-813136
	I1219 03:44:18.636419   44926 main.go:144] libmachine: Using SSH client type: native
	I1219 03:44:18.636632   44926 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.50.162 22 <nil> <nil>}
	I1219 03:44:18.636662   44926 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1219 03:44:24.166010   44926 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1219 03:44:24.166036   44926 machine.go:97] duration metric: took 6.131284273s to provisionDockerMachine
	I1219 03:44:24.166048   44926 start.go:293] postStartSetup for "pause-813136" (driver="kvm2")
	I1219 03:44:24.166057   44926 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1219 03:44:24.166139   44926 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1219 03:44:24.169217   44926 main.go:144] libmachine: domain pause-813136 has defined MAC address 52:54:00:11:07:28 in network mk-pause-813136
	I1219 03:44:24.169658   44926 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:11:07:28", ip: ""} in network mk-pause-813136: {Iface:virbr2 ExpiryTime:2025-12-19 04:43:05 +0000 UTC Type:0 Mac:52:54:00:11:07:28 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:pause-813136 Clientid:01:52:54:00:11:07:28}
	I1219 03:44:24.169689   44926 main.go:144] libmachine: domain pause-813136 has defined IP address 192.168.50.162 and MAC address 52:54:00:11:07:28 in network mk-pause-813136
	I1219 03:44:24.169862   44926 sshutil.go:53] new ssh client: &{IP:192.168.50.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/pause-813136/id_rsa Username:docker}
	I1219 03:44:24.254613   44926 ssh_runner.go:195] Run: cat /etc/os-release
	I1219 03:44:24.259119   44926 info.go:137] Remote host: Buildroot 2025.02
	I1219 03:44:24.259140   44926 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-5010/.minikube/addons for local assets ...
	I1219 03:44:24.259219   44926 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-5010/.minikube/files for local assets ...
	I1219 03:44:24.259316   44926 filesync.go:149] local asset: /home/jenkins/minikube-integration/22230-5010/.minikube/files/etc/ssl/certs/89372.pem -> 89372.pem in /etc/ssl/certs
	I1219 03:44:24.259417   44926 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1219 03:44:24.270590   44926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/files/etc/ssl/certs/89372.pem --> /etc/ssl/certs/89372.pem (1708 bytes)
	I1219 03:44:24.297619   44926 start.go:296] duration metric: took 131.558811ms for postStartSetup
	I1219 03:44:24.297650   44926 fix.go:56] duration metric: took 6.270624509s for fixHost
	I1219 03:44:24.300404   44926 main.go:144] libmachine: domain pause-813136 has defined MAC address 52:54:00:11:07:28 in network mk-pause-813136
	I1219 03:44:24.300820   44926 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:11:07:28", ip: ""} in network mk-pause-813136: {Iface:virbr2 ExpiryTime:2025-12-19 04:43:05 +0000 UTC Type:0 Mac:52:54:00:11:07:28 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:pause-813136 Clientid:01:52:54:00:11:07:28}
	I1219 03:44:24.300856   44926 main.go:144] libmachine: domain pause-813136 has defined IP address 192.168.50.162 and MAC address 52:54:00:11:07:28 in network mk-pause-813136
	I1219 03:44:24.301062   44926 main.go:144] libmachine: Using SSH client type: native
	I1219 03:44:24.301238   44926 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.50.162 22 <nil> <nil>}
	I1219 03:44:24.301247   44926 main.go:144] libmachine: About to run SSH command:
	date +%s.%N
	I1219 03:44:24.406347   44926 main.go:144] libmachine: SSH cmd err, output: <nil>: 1766115864.400759637
	
	I1219 03:44:24.406369   44926 fix.go:216] guest clock: 1766115864.400759637
	I1219 03:44:24.406378   44926 fix.go:229] Guest: 2025-12-19 03:44:24.400759637 +0000 UTC Remote: 2025-12-19 03:44:24.297654027 +0000 UTC m=+12.162671752 (delta=103.10561ms)
	I1219 03:44:24.406393   44926 fix.go:200] guest clock delta is within tolerance: 103.10561ms
	I1219 03:44:24.406397   44926 start.go:83] releasing machines lock for "pause-813136", held for 6.37939777s
	I1219 03:44:24.409894   44926 main.go:144] libmachine: domain pause-813136 has defined MAC address 52:54:00:11:07:28 in network mk-pause-813136
	I1219 03:44:24.410373   44926 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:11:07:28", ip: ""} in network mk-pause-813136: {Iface:virbr2 ExpiryTime:2025-12-19 04:43:05 +0000 UTC Type:0 Mac:52:54:00:11:07:28 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:pause-813136 Clientid:01:52:54:00:11:07:28}
	I1219 03:44:24.410421   44926 main.go:144] libmachine: domain pause-813136 has defined IP address 192.168.50.162 and MAC address 52:54:00:11:07:28 in network mk-pause-813136
	I1219 03:44:24.411037   44926 ssh_runner.go:195] Run: cat /version.json
	I1219 03:44:24.411106   44926 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1219 03:44:24.414467   44926 main.go:144] libmachine: domain pause-813136 has defined MAC address 52:54:00:11:07:28 in network mk-pause-813136
	I1219 03:44:24.414545   44926 main.go:144] libmachine: domain pause-813136 has defined MAC address 52:54:00:11:07:28 in network mk-pause-813136
	I1219 03:44:24.414931   44926 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:11:07:28", ip: ""} in network mk-pause-813136: {Iface:virbr2 ExpiryTime:2025-12-19 04:43:05 +0000 UTC Type:0 Mac:52:54:00:11:07:28 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:pause-813136 Clientid:01:52:54:00:11:07:28}
	I1219 03:44:24.414957   44926 main.go:144] libmachine: domain pause-813136 has defined IP address 192.168.50.162 and MAC address 52:54:00:11:07:28 in network mk-pause-813136
	I1219 03:44:24.415023   44926 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:11:07:28", ip: ""} in network mk-pause-813136: {Iface:virbr2 ExpiryTime:2025-12-19 04:43:05 +0000 UTC Type:0 Mac:52:54:00:11:07:28 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:pause-813136 Clientid:01:52:54:00:11:07:28}
	I1219 03:44:24.415056   44926 main.go:144] libmachine: domain pause-813136 has defined IP address 192.168.50.162 and MAC address 52:54:00:11:07:28 in network mk-pause-813136
	I1219 03:44:24.415124   44926 sshutil.go:53] new ssh client: &{IP:192.168.50.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/pause-813136/id_rsa Username:docker}
	I1219 03:44:24.415416   44926 sshutil.go:53] new ssh client: &{IP:192.168.50.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/pause-813136/id_rsa Username:docker}
	I1219 03:44:24.533008   44926 ssh_runner.go:195] Run: systemctl --version
	I1219 03:44:24.539818   44926 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1219 03:44:24.689886   44926 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1219 03:44:24.698694   44926 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1219 03:44:24.698762   44926 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1219 03:44:24.709667   44926 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1219 03:44:24.709694   44926 start.go:496] detecting cgroup driver to use...
	I1219 03:44:24.709765   44926 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1219 03:44:24.727347   44926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1219 03:44:24.744220   44926 docker.go:218] disabling cri-docker service (if available) ...
	I1219 03:44:24.744278   44926 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1219 03:44:24.763745   44926 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1219 03:44:24.779442   44926 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1219 03:44:24.953744   44926 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1219 03:44:25.131828   44926 docker.go:234] disabling docker service ...
	I1219 03:44:25.131898   44926 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1219 03:44:25.159417   44926 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1219 03:44:25.177196   44926 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1219 03:44:25.354724   44926 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1219 03:44:25.527240   44926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1219 03:44:25.542953   44926 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 03:44:25.567145   44926 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1219 03:44:25.567234   44926 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:44:25.578700   44926 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1219 03:44:25.578752   44926 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:44:25.591127   44926 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:44:25.603551   44926 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:44:25.617235   44926 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1219 03:44:25.634409   44926 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:44:25.647204   44926 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:44:25.659803   44926 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:44:25.671682   44926 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1219 03:44:25.682541   44926 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1219 03:44:25.693639   44926 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:44:25.857659   44926 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1219 03:44:26.087789   44926 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1219 03:44:26.087868   44926 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1219 03:44:26.093863   44926 start.go:564] Will wait 60s for crictl version
	I1219 03:44:26.093930   44926 ssh_runner.go:195] Run: which crictl
	I1219 03:44:26.098043   44926 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1219 03:44:26.129909   44926 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1219 03:44:26.129981   44926 ssh_runner.go:195] Run: crio --version
	I1219 03:44:26.156970   44926 ssh_runner.go:195] Run: crio --version
	I1219 03:44:26.185614   44926 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.29.1 ...
	I1219 03:44:26.189532   44926 main.go:144] libmachine: domain pause-813136 has defined MAC address 52:54:00:11:07:28 in network mk-pause-813136
	I1219 03:44:26.189992   44926 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:11:07:28", ip: ""} in network mk-pause-813136: {Iface:virbr2 ExpiryTime:2025-12-19 04:43:05 +0000 UTC Type:0 Mac:52:54:00:11:07:28 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:pause-813136 Clientid:01:52:54:00:11:07:28}
	I1219 03:44:26.190016   44926 main.go:144] libmachine: domain pause-813136 has defined IP address 192.168.50.162 and MAC address 52:54:00:11:07:28 in network mk-pause-813136
	I1219 03:44:26.190268   44926 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1219 03:44:26.195781   44926 kubeadm.go:884] updating cluster {Name:pause-813136 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-813136 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.162 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1219 03:44:26.195909   44926 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1219 03:44:26.195961   44926 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:44:26.239454   44926 crio.go:514] all images are preloaded for cri-o runtime.
	I1219 03:44:26.239474   44926 crio.go:433] Images already preloaded, skipping extraction
	I1219 03:44:26.239516   44926 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:44:26.268165   44926 crio.go:514] all images are preloaded for cri-o runtime.
	I1219 03:44:26.268187   44926 cache_images.go:86] Images are preloaded, skipping loading
	I1219 03:44:26.268196   44926 kubeadm.go:935] updating node { 192.168.50.162 8443 v1.34.3 crio true true} ...
	I1219 03:44:26.268301   44926 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-813136 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.162
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:pause-813136 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1219 03:44:26.268403   44926 ssh_runner.go:195] Run: crio config
	I1219 03:44:26.312041   44926 cni.go:84] Creating CNI manager for ""
	I1219 03:44:26.312070   44926 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1219 03:44:26.312096   44926 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1219 03:44:26.312124   44926 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.162 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-813136 NodeName:pause-813136 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.162"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.162 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1219 03:44:26.312270   44926 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.162
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-813136"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.162"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.162"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1219 03:44:26.312348   44926 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1219 03:44:26.323982   44926 binaries.go:51] Found k8s binaries, skipping transfer
	I1219 03:44:26.324066   44926 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1219 03:44:26.335786   44926 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1219 03:44:26.355772   44926 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1219 03:44:26.375063   44926 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1219 03:44:26.397561   44926 ssh_runner.go:195] Run: grep 192.168.50.162	control-plane.minikube.internal$ /etc/hosts
	I1219 03:44:26.403859   44926 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:44:26.695503   44926 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:44:26.781036   44926 certs.go:69] Setting up /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/pause-813136 for IP: 192.168.50.162
	I1219 03:44:26.781059   44926 certs.go:195] generating shared ca certs ...
	I1219 03:44:26.781077   44926 certs.go:227] acquiring lock for ca certs: {Name:mk433b742ffd55233764aebe3c6f298e0d1f02ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:44:26.781261   44926 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22230-5010/.minikube/ca.key
	I1219 03:44:26.781383   44926 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22230-5010/.minikube/proxy-client-ca.key
	I1219 03:44:26.781413   44926 certs.go:257] generating profile certs ...
	I1219 03:44:26.781553   44926 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/pause-813136/client.key
	I1219 03:44:26.781685   44926 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/pause-813136/apiserver.key.6a27bf0c
	I1219 03:44:26.781762   44926 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/pause-813136/proxy-client.key
	I1219 03:44:26.781919   44926 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/8937.pem (1338 bytes)
	W1219 03:44:26.781975   44926 certs.go:480] ignoring /home/jenkins/minikube-integration/22230-5010/.minikube/certs/8937_empty.pem, impossibly tiny 0 bytes
	I1219 03:44:26.781992   44926 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca-key.pem (1675 bytes)
	I1219 03:44:26.782035   44926 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca.pem (1082 bytes)
	I1219 03:44:26.782083   44926 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/cert.pem (1123 bytes)
	I1219 03:44:26.782129   44926 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/key.pem (1679 bytes)
	I1219 03:44:26.782199   44926 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/files/etc/ssl/certs/89372.pem (1708 bytes)
	I1219 03:44:26.783051   44926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1219 03:44:26.869678   44926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1219 03:44:26.936803   44926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1219 03:44:27.017765   44926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1219 03:44:27.077134   44926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/pause-813136/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1219 03:44:27.154758   44926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/pause-813136/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1219 03:44:27.262997   44926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/pause-813136/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1219 03:44:27.370616   44926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/pause-813136/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1219 03:44:27.492319   44926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/files/etc/ssl/certs/89372.pem --> /usr/share/ca-certificates/89372.pem (1708 bytes)
	I1219 03:44:27.623362   44926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1219 03:44:27.686597   44926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/certs/8937.pem --> /usr/share/ca-certificates/8937.pem (1338 bytes)
	I1219 03:44:27.745955   44926 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1219 03:44:27.779305   44926 ssh_runner.go:195] Run: openssl version
	I1219 03:44:27.789719   44926 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/89372.pem
	I1219 03:44:27.808958   44926 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/89372.pem /etc/ssl/certs/89372.pem
	I1219 03:44:27.827693   44926 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/89372.pem
	I1219 03:44:27.835860   44926 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 19 02:46 /usr/share/ca-certificates/89372.pem
	I1219 03:44:27.835919   44926 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/89372.pem
	I1219 03:44:27.846740   44926 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1219 03:44:27.866473   44926 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:44:27.886211   44926 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1219 03:44:27.917099   44926 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:44:27.927743   44926 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 19 02:26 /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:44:27.927803   44926 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:44:27.943501   44926 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1219 03:44:27.983884   44926 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8937.pem
	I1219 03:44:28.019134   44926 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8937.pem /etc/ssl/certs/8937.pem
	I1219 03:44:28.046836   44926 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8937.pem
	I1219 03:44:28.055709   44926 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 19 02:46 /usr/share/ca-certificates/8937.pem
	I1219 03:44:28.055767   44926 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8937.pem
	I1219 03:44:28.064666   44926 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1219 03:44:28.087924   44926 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1219 03:44:28.105176   44926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1219 03:44:28.112920   44926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1219 03:44:28.127933   44926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1219 03:44:28.140722   44926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1219 03:44:28.150767   44926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1219 03:44:28.168556   44926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1219 03:44:28.181475   44926 kubeadm.go:401] StartCluster: {Name:pause-813136 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-813136 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.162 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:44:28.181671   44926 cri.go:57] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1219 03:44:28.181726   44926 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:44:28.251920   44926 cri.go:92] found id: "e56805f91d0063dda29f58987a1a7dc2b0e77b097271529fa0b9163bf52ac142"
	I1219 03:44:28.251940   44926 cri.go:92] found id: "962f0a8a67335a88760ed2a1a0c705e78a09cf72bc575a1e326bbade3377b38f"
	I1219 03:44:28.251946   44926 cri.go:92] found id: "5ad4d5a9860531cb61c9a0e3b9d0bf2ff3b7f81b1ef30c9aa66047be00028f5e"
	I1219 03:44:28.251952   44926 cri.go:92] found id: "9ff9551c32b87b4e8e57071827c7b999a4d9dccf0cb90592e1b30972b339fc43"
	I1219 03:44:28.251956   44926 cri.go:92] found id: "18cd24c1b1176f2a47b1a9b256f8e63a5e52048661701cfcd864c2f888cd8444"
	I1219 03:44:28.251960   44926 cri.go:92] found id: "4fe030c193f1e8b9d4b9f05840eff806d4133ae4e14349cdc0bf44b641318844"
	I1219 03:44:28.251964   44926 cri.go:92] found id: "3a37e0ce0ac117a8ef4ea4682ec30651c7b8d66fca3b46e8e55fdeaf6298d2b0"
	I1219 03:44:28.251968   44926 cri.go:92] found id: "f14af7980e42c904cc0921003517dc037e1e2c23b24ad3dd23cd9f18519e2a0f"
	I1219 03:44:28.251971   44926 cri.go:92] found id: "512c060b1e78bc43dc12cf59d1ff2842e6cf0f9e7899ef605d976a984c113745"
	I1219 03:44:28.251994   44926 cri.go:92] found id: "6fc9d72b099af58bc9caaa263c65c266d259c56671803b54e42771b8000f6ed5"
	I1219 03:44:28.252001   44926 cri.go:92] found id: "87d5d032ff3ace706c919efc5a3201b49524de26ff6c1205cf14a14470b34091"
	I1219 03:44:28.252005   44926 cri.go:92] found id: "305e7cb17165545063a1a03cdaa217e96ba8e0fa951207af173e77a45e25e5f0"
	I1219 03:44:28.252010   44926 cri.go:92] found id: ""
	I1219 03:44:28.252060   44926 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-813136 -n pause-813136
helpers_test.go:253: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-813136 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p pause-813136 logs -n 25: (1.116370796s)
helpers_test.go:261: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                            ARGS                                                                             │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p running-upgrade-964792 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                      │ running-upgrade-964792    │ jenkins │ v1.37.0 │ 19 Dec 25 03:42 UTC │                     │
	│ delete  │ -p offline-crio-052125                                                                                                                                      │ offline-crio-052125       │ jenkins │ v1.37.0 │ 19 Dec 25 03:42 UTC │ 19 Dec 25 03:42 UTC │
	│ start   │ -p pause-813136 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio                                                     │ pause-813136              │ jenkins │ v1.37.0 │ 19 Dec 25 03:42 UTC │ 19 Dec 25 03:44 UTC │
	│ start   │ -p kubernetes-upgrade-061737 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio                                             │ kubernetes-upgrade-061737 │ jenkins │ v1.37.0 │ 19 Dec 25 03:42 UTC │                     │
	│ start   │ -p kubernetes-upgrade-061737 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                 │ kubernetes-upgrade-061737 │ jenkins │ v1.37.0 │ 19 Dec 25 03:42 UTC │ 19 Dec 25 03:43 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-291901 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker │ stopped-upgrade-291901    │ jenkins │ v1.37.0 │ 19 Dec 25 03:42 UTC │                     │
	│ delete  │ -p stopped-upgrade-291901                                                                                                                                   │ stopped-upgrade-291901    │ jenkins │ v1.37.0 │ 19 Dec 25 03:42 UTC │ 19 Dec 25 03:42 UTC │
	│ start   │ -p NoKubernetes-982841 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio                                                 │ NoKubernetes-982841       │ jenkins │ v1.37.0 │ 19 Dec 25 03:42 UTC │                     │
	│ start   │ -p NoKubernetes-982841 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                                         │ NoKubernetes-982841       │ jenkins │ v1.37.0 │ 19 Dec 25 03:42 UTC │ 19 Dec 25 03:43 UTC │
	│ delete  │ -p kubernetes-upgrade-061737                                                                                                                                │ kubernetes-upgrade-061737 │ jenkins │ v1.37.0 │ 19 Dec 25 03:43 UTC │ 19 Dec 25 03:43 UTC │
	│ start   │ -p force-systemd-flag-589340 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                   │ force-systemd-flag-589340 │ jenkins │ v1.37.0 │ 19 Dec 25 03:43 UTC │ 19 Dec 25 03:44 UTC │
	│ start   │ -p NoKubernetes-982841 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                         │ NoKubernetes-982841       │ jenkins │ v1.37.0 │ 19 Dec 25 03:43 UTC │ 19 Dec 25 03:43 UTC │
	│ delete  │ -p NoKubernetes-982841                                                                                                                                      │ NoKubernetes-982841       │ jenkins │ v1.37.0 │ 19 Dec 25 03:44 UTC │ 19 Dec 25 03:44 UTC │
	│ start   │ -p NoKubernetes-982841 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                         │ NoKubernetes-982841       │ jenkins │ v1.37.0 │ 19 Dec 25 03:44 UTC │ 19 Dec 25 03:44 UTC │
	│ start   │ -p pause-813136 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                              │ pause-813136              │ jenkins │ v1.37.0 │ 19 Dec 25 03:44 UTC │ 19 Dec 25 03:45 UTC │
	│ ssh     │ force-systemd-flag-589340 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                        │ force-systemd-flag-589340 │ jenkins │ v1.37.0 │ 19 Dec 25 03:44 UTC │ 19 Dec 25 03:44 UTC │
	│ delete  │ -p force-systemd-flag-589340                                                                                                                                │ force-systemd-flag-589340 │ jenkins │ v1.37.0 │ 19 Dec 25 03:44 UTC │ 19 Dec 25 03:44 UTC │
	│ start   │ -p guest-783207 --no-kubernetes --driver=kvm2  --container-runtime=crio                                                                                     │ guest-783207              │ jenkins │ v1.37.0 │ 19 Dec 25 03:44 UTC │ 19 Dec 25 03:44 UTC │
	│ ssh     │ -p NoKubernetes-982841 sudo systemctl is-active --quiet service kubelet                                                                                     │ NoKubernetes-982841       │ jenkins │ v1.37.0 │ 19 Dec 25 03:44 UTC │                     │
	│ stop    │ -p NoKubernetes-982841                                                                                                                                      │ NoKubernetes-982841       │ jenkins │ v1.37.0 │ 19 Dec 25 03:44 UTC │ 19 Dec 25 03:44 UTC │
	│ start   │ -p NoKubernetes-982841 --driver=kvm2  --container-runtime=crio                                                                                              │ NoKubernetes-982841       │ jenkins │ v1.37.0 │ 19 Dec 25 03:44 UTC │ 19 Dec 25 03:45 UTC │
	│ start   │ -p force-systemd-env-919893 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                                    │ force-systemd-env-919893  │ jenkins │ v1.37.0 │ 19 Dec 25 03:44 UTC │                     │
	│ ssh     │ -p NoKubernetes-982841 sudo systemctl is-active --quiet service kubelet                                                                                     │ NoKubernetes-982841       │ jenkins │ v1.37.0 │ 19 Dec 25 03:45 UTC │                     │
	│ delete  │ -p NoKubernetes-982841                                                                                                                                      │ NoKubernetes-982841       │ jenkins │ v1.37.0 │ 19 Dec 25 03:45 UTC │ 19 Dec 25 03:45 UTC │
	│ start   │ -p cert-expiration-387964 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio                                                        │ cert-expiration-387964    │ jenkins │ v1.37.0 │ 19 Dec 25 03:45 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 03:45:03
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 03:45:03.757173   45792 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:45:03.757467   45792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:45:03.757471   45792 out.go:374] Setting ErrFile to fd 2...
	I1219 03:45:03.757475   45792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:45:03.757797   45792 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5010/.minikube/bin
	I1219 03:45:03.758469   45792 out.go:368] Setting JSON to false
	I1219 03:45:03.759897   45792 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5248,"bootTime":1766110656,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 03:45:03.759960   45792 start.go:143] virtualization: kvm guest
	I1219 03:45:03.761956   45792 out.go:179] * [cert-expiration-387964] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 03:45:03.763197   45792 notify.go:221] Checking for updates...
	I1219 03:45:03.763204   45792 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 03:45:03.764319   45792 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 03:45:03.765555   45792 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-5010/kubeconfig
	I1219 03:45:03.766761   45792 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5010/.minikube
	I1219 03:45:03.767960   45792 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 03:45:03.769147   45792 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 03:45:03.771054   45792 config.go:182] Loaded profile config "force-systemd-env-919893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:45:03.771190   45792 config.go:182] Loaded profile config "guest-783207": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1219 03:45:03.771389   45792 config.go:182] Loaded profile config "pause-813136": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:45:03.771544   45792 config.go:182] Loaded profile config "running-upgrade-964792": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1219 03:45:03.771693   45792 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 03:45:03.808022   45792 out.go:179] * Using the kvm2 driver based on user configuration
	I1219 03:45:03.809068   45792 start.go:309] selected driver: kvm2
	I1219 03:45:03.809092   45792 start.go:928] validating driver "kvm2" against <nil>
	I1219 03:45:03.809105   45792 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 03:45:03.810190   45792 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1219 03:45:03.810502   45792 start_flags.go:975] Wait components to verify : map[apiserver:true system_pods:true]
	I1219 03:45:03.810526   45792 cni.go:84] Creating CNI manager for ""
	I1219 03:45:03.810637   45792 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1219 03:45:03.810643   45792 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1219 03:45:03.810731   45792 start.go:353] cluster config:
	{Name:cert-expiration-387964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:cert-expiration-387964 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:45:03.810858   45792 iso.go:125] acquiring lock: {Name:mk42290af04a74a4dddf27fa33aac85c9bdccfe0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:45:03.812994   45792 out.go:179] * Starting "cert-expiration-387964" primary control-plane node in "cert-expiration-387964" cluster
	I1219 03:45:01.821635   43491 api_server.go:253] Checking apiserver healthz at https://192.168.72.51:8443/healthz ...
	I1219 03:45:01.822355   43491 api_server.go:269] stopped: https://192.168.72.51:8443/healthz: Get "https://192.168.72.51:8443/healthz": dial tcp 192.168.72.51:8443: connect: connection refused
	I1219 03:45:01.822414   43491 cri.go:57] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1219 03:45:01.822481   43491 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1219 03:45:01.865544   43491 cri.go:92] found id: "585e3cce44d1126e40385c0604426be3495c3b57b7698e0363a5f1983d7e285b"
	I1219 03:45:01.865594   43491 cri.go:92] found id: ""
	I1219 03:45:01.865604   43491 logs.go:282] 1 containers: [585e3cce44d1126e40385c0604426be3495c3b57b7698e0363a5f1983d7e285b]
	I1219 03:45:01.865667   43491 ssh_runner.go:195] Run: which crictl
	I1219 03:45:01.870867   43491 cri.go:57] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1219 03:45:01.870933   43491 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1219 03:45:01.922887   43491 cri.go:92] found id: "53d800e1ee96bfba31e8f740b14b8da38d8dfab6afc39ed2f1d301b246a67fe7"
	I1219 03:45:01.922911   43491 cri.go:92] found id: ""
	I1219 03:45:01.922921   43491 logs.go:282] 1 containers: [53d800e1ee96bfba31e8f740b14b8da38d8dfab6afc39ed2f1d301b246a67fe7]
	I1219 03:45:01.922999   43491 ssh_runner.go:195] Run: which crictl
	I1219 03:45:01.928942   43491 cri.go:57] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1219 03:45:01.929032   43491 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1219 03:45:01.970638   43491 cri.go:92] found id: "b59158ce45f13fa82d44cfc35694a0a0243a43dd2b96405d95aa089a34e53d4f"
	I1219 03:45:01.970665   43491 cri.go:92] found id: ""
	I1219 03:45:01.970675   43491 logs.go:282] 1 containers: [b59158ce45f13fa82d44cfc35694a0a0243a43dd2b96405d95aa089a34e53d4f]
	I1219 03:45:01.970737   43491 ssh_runner.go:195] Run: which crictl
	I1219 03:45:01.975227   43491 cri.go:57] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1219 03:45:01.975311   43491 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1219 03:45:02.015880   43491 cri.go:92] found id: "5e17df7e58c54d28fb401e28a90bb4aa28fde085d64c01000b43817f26687c31"
	I1219 03:45:02.015899   43491 cri.go:92] found id: "26a5313abc2a12853d08d1fb5726fed4b6a0e0ce8cd2e3beb603e648d475d54f"
	I1219 03:45:02.015903   43491 cri.go:92] found id: ""
	I1219 03:45:02.015910   43491 logs.go:282] 2 containers: [5e17df7e58c54d28fb401e28a90bb4aa28fde085d64c01000b43817f26687c31 26a5313abc2a12853d08d1fb5726fed4b6a0e0ce8cd2e3beb603e648d475d54f]
	I1219 03:45:02.015956   43491 ssh_runner.go:195] Run: which crictl
	I1219 03:45:02.020337   43491 ssh_runner.go:195] Run: which crictl
	I1219 03:45:02.025869   43491 cri.go:57] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1219 03:45:02.025934   43491 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1219 03:45:02.075479   43491 cri.go:92] found id: "0cb084af389a1bd0b5c3b71a9e44e94bba7195b499727a892a147b6da31605ed"
	I1219 03:45:02.075498   43491 cri.go:92] found id: ""
	I1219 03:45:02.075506   43491 logs.go:282] 1 containers: [0cb084af389a1bd0b5c3b71a9e44e94bba7195b499727a892a147b6da31605ed]
	I1219 03:45:02.075559   43491 ssh_runner.go:195] Run: which crictl
	I1219 03:45:02.081619   43491 cri.go:57] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1219 03:45:02.081683   43491 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1219 03:45:02.126610   43491 cri.go:92] found id: "ab75f962d8500daa81a9b0d01aef4ec2f6d74130562f0baec94aa1e5a804d4cf"
	I1219 03:45:02.126632   43491 cri.go:92] found id: "fb035234b551cfda9c8d7e616d0ee814d600644b66619823bf6461aace6531a8"
	I1219 03:45:02.126638   43491 cri.go:92] found id: ""
	I1219 03:45:02.126648   43491 logs.go:282] 2 containers: [ab75f962d8500daa81a9b0d01aef4ec2f6d74130562f0baec94aa1e5a804d4cf fb035234b551cfda9c8d7e616d0ee814d600644b66619823bf6461aace6531a8]
	I1219 03:45:02.126706   43491 ssh_runner.go:195] Run: which crictl
	I1219 03:45:02.130907   43491 ssh_runner.go:195] Run: which crictl
	I1219 03:45:02.134777   43491 cri.go:57] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1219 03:45:02.134843   43491 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1219 03:45:02.173806   43491 cri.go:92] found id: ""
	I1219 03:45:02.173833   43491 logs.go:282] 0 containers: []
	W1219 03:45:02.173842   43491 logs.go:284] No container was found matching "kindnet"
	I1219 03:45:02.173851   43491 cri.go:57] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1219 03:45:02.173900   43491 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1219 03:45:02.216085   43491 cri.go:92] found id: "7356291d08314af75a3b5d53d5f33844edd5a37751b81e5a248e5ffbf789859b"
	I1219 03:45:02.216115   43491 cri.go:92] found id: ""
	I1219 03:45:02.216124   43491 logs.go:282] 1 containers: [7356291d08314af75a3b5d53d5f33844edd5a37751b81e5a248e5ffbf789859b]
	I1219 03:45:02.216193   43491 ssh_runner.go:195] Run: which crictl
	I1219 03:45:02.221816   43491 logs.go:123] Gathering logs for kubelet ...
	I1219 03:45:02.221852   43491 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1219 03:45:02.262368   43491 logs.go:138] Found kubelet problem: Dec 19 03:42:55 running-upgrade-964792 kubelet[1231]: E1219 03:42:55.688316    1231 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:running-upgrade-964792\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'running-upgrade-964792' and this object" logger="UnhandledError"
	W1219 03:45:02.262761   43491 logs.go:138] Found kubelet problem: Dec 19 03:42:55 running-upgrade-964792 kubelet[1231]: I1219 03:42:55.688356    1231 status_manager.go:890] "Failed to get status for pod" podUID="d9cef4eb74460bab8c6dbc95b8aae891" pod="kube-system/kube-controller-manager-running-upgrade-964792" err="pods \"kube-controller-manager-running-upgrade-964792\" is forbidden: User \"system:node:running-upgrade-964792\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'running-upgrade-964792' and this object"
	W1219 03:45:02.263063   43491 logs.go:138] Found kubelet problem: Dec 19 03:42:55 running-upgrade-964792 kubelet[1231]: E1219 03:42:55.689571    1231 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:running-upgrade-964792\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'running-upgrade-964792' and this object" logger="UnhandledError"
	W1219 03:45:02.263385   43491 logs.go:138] Found kubelet problem: Dec 19 03:42:55 running-upgrade-964792 kubelet[1231]: I1219 03:42:55.708997    1231 status_manager.go:890] "Failed to get status for pod" podUID="fbdafa3f-5bac-464e-842f-c9e50c0a0447" pod="kube-system/storage-provisioner" err="pods \"storage-provisioner\" is forbidden: User \"system:node:running-upgrade-964792\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'running-upgrade-964792' and this object"
	I1219 03:45:02.338204   43491 logs.go:123] Gathering logs for dmesg ...
	I1219 03:45:02.338242   43491 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1219 03:45:02.360027   43491 logs.go:123] Gathering logs for kube-apiserver [585e3cce44d1126e40385c0604426be3495c3b57b7698e0363a5f1983d7e285b] ...
	I1219 03:45:02.360058   43491 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 585e3cce44d1126e40385c0604426be3495c3b57b7698e0363a5f1983d7e285b"
	I1219 03:45:02.407438   43491 logs.go:123] Gathering logs for etcd [53d800e1ee96bfba31e8f740b14b8da38d8dfab6afc39ed2f1d301b246a67fe7] ...
	I1219 03:45:02.407472   43491 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 53d800e1ee96bfba31e8f740b14b8da38d8dfab6afc39ed2f1d301b246a67fe7"
	I1219 03:45:02.473724   43491 logs.go:123] Gathering logs for kube-scheduler [5e17df7e58c54d28fb401e28a90bb4aa28fde085d64c01000b43817f26687c31] ...
	I1219 03:45:02.473768   43491 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e17df7e58c54d28fb401e28a90bb4aa28fde085d64c01000b43817f26687c31"
	I1219 03:45:02.555160   43491 logs.go:123] Gathering logs for kube-controller-manager [ab75f962d8500daa81a9b0d01aef4ec2f6d74130562f0baec94aa1e5a804d4cf] ...
	I1219 03:45:02.555200   43491 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab75f962d8500daa81a9b0d01aef4ec2f6d74130562f0baec94aa1e5a804d4cf"
	I1219 03:45:02.602502   43491 logs.go:123] Gathering logs for CRI-O ...
	I1219 03:45:02.602544   43491 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1219 03:45:02.992489   43491 logs.go:123] Gathering logs for describe nodes ...
	I1219 03:45:02.992515   43491 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1219 03:45:03.085268   43491 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1219 03:45:03.085294   43491 logs.go:123] Gathering logs for coredns [b59158ce45f13fa82d44cfc35694a0a0243a43dd2b96405d95aa089a34e53d4f] ...
	I1219 03:45:03.085309   43491 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b59158ce45f13fa82d44cfc35694a0a0243a43dd2b96405d95aa089a34e53d4f"
	I1219 03:45:03.131371   43491 logs.go:123] Gathering logs for kube-scheduler [26a5313abc2a12853d08d1fb5726fed4b6a0e0ce8cd2e3beb603e648d475d54f] ...
	I1219 03:45:03.131408   43491 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26a5313abc2a12853d08d1fb5726fed4b6a0e0ce8cd2e3beb603e648d475d54f"
	I1219 03:45:03.180266   43491 logs.go:123] Gathering logs for kube-proxy [0cb084af389a1bd0b5c3b71a9e44e94bba7195b499727a892a147b6da31605ed] ...
	I1219 03:45:03.180314   43491 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0cb084af389a1bd0b5c3b71a9e44e94bba7195b499727a892a147b6da31605ed"
	I1219 03:45:03.228279   43491 logs.go:123] Gathering logs for kube-controller-manager [fb035234b551cfda9c8d7e616d0ee814d600644b66619823bf6461aace6531a8] ...
	I1219 03:45:03.228313   43491 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb035234b551cfda9c8d7e616d0ee814d600644b66619823bf6461aace6531a8"
	W1219 03:45:03.275050   43491 logs.go:130] failed kube-controller-manager [fb035234b551cfda9c8d7e616d0ee814d600644b66619823bf6461aace6531a8]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb035234b551cfda9c8d7e616d0ee814d600644b66619823bf6461aace6531a8" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb035234b551cfda9c8d7e616d0ee814d600644b66619823bf6461aace6531a8": Process exited with status 1
	stdout:
	
	stderr:
	E1219 03:45:03.256951    4284 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb035234b551cfda9c8d7e616d0ee814d600644b66619823bf6461aace6531a8\": container with ID starting with fb035234b551cfda9c8d7e616d0ee814d600644b66619823bf6461aace6531a8 not found: ID does not exist" containerID="fb035234b551cfda9c8d7e616d0ee814d600644b66619823bf6461aace6531a8"
	time="2025-12-19T03:45:03Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"fb035234b551cfda9c8d7e616d0ee814d600644b66619823bf6461aace6531a8\": container with ID starting with fb035234b551cfda9c8d7e616d0ee814d600644b66619823bf6461aace6531a8 not found: ID does not exist"
	 output: 
	** stderr ** 
	E1219 03:45:03.256951    4284 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb035234b551cfda9c8d7e616d0ee814d600644b66619823bf6461aace6531a8\": container with ID starting with fb035234b551cfda9c8d7e616d0ee814d600644b66619823bf6461aace6531a8 not found: ID does not exist" containerID="fb035234b551cfda9c8d7e616d0ee814d600644b66619823bf6461aace6531a8"
	time="2025-12-19T03:45:03Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"fb035234b551cfda9c8d7e616d0ee814d600644b66619823bf6461aace6531a8\": container with ID starting with fb035234b551cfda9c8d7e616d0ee814d600644b66619823bf6461aace6531a8 not found: ID does not exist"
	
	** /stderr **
	I1219 03:45:03.275078   43491 logs.go:123] Gathering logs for storage-provisioner [7356291d08314af75a3b5d53d5f33844edd5a37751b81e5a248e5ffbf789859b] ...
	I1219 03:45:03.275095   43491 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7356291d08314af75a3b5d53d5f33844edd5a37751b81e5a248e5ffbf789859b"
	I1219 03:45:03.325803   43491 logs.go:123] Gathering logs for container status ...
	I1219 03:45:03.325836   43491 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1219 03:45:03.384850   43491 out.go:374] Setting ErrFile to fd 2...
	I1219 03:45:03.384886   43491 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W1219 03:45:03.384959   43491 out.go:285] X Problems detected in kubelet:
	W1219 03:45:03.384980   43491 out.go:285]   Dec 19 03:42:55 running-upgrade-964792 kubelet[1231]: E1219 03:42:55.688316    1231 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:running-upgrade-964792\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'running-upgrade-964792' and this object" logger="UnhandledError"
	W1219 03:45:03.384993   43491 out.go:285]   Dec 19 03:42:55 running-upgrade-964792 kubelet[1231]: I1219 03:42:55.688356    1231 status_manager.go:890] "Failed to get status for pod" podUID="d9cef4eb74460bab8c6dbc95b8aae891" pod="kube-system/kube-controller-manager-running-upgrade-964792" err="pods \"kube-controller-manager-running-upgrade-964792\" is forbidden: User \"system:node:running-upgrade-964792\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'running-upgrade-964792' and this object"
	W1219 03:45:03.385002   43491 out.go:285]   Dec 19 03:42:55 running-upgrade-964792 kubelet[1231]: E1219 03:42:55.689571    1231 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:running-upgrade-964792\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'running-upgrade-964792' and this object" logger="UnhandledError"
	W1219 03:45:03.385012   43491 out.go:285]   Dec 19 03:42:55 running-upgrade-964792 kubelet[1231]: I1219 03:42:55.708997    1231 status_manager.go:890] "Failed to get status for pod" podUID="fbdafa3f-5bac-464e-842f-c9e50c0a0447" pod="kube-system/storage-provisioner" err="pods \"storage-provisioner\" is forbidden: User \"system:node:running-upgrade-964792\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'running-upgrade-964792' and this object"
	I1219 03:45:03.385021   43491 out.go:374] Setting ErrFile to fd 2...
	I1219 03:45:03.385035   43491 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:45:02.116668   45567 main.go:144] libmachine: waiting for domain to start...
	I1219 03:45:02.118226   45567 main.go:144] libmachine: domain is now running
	I1219 03:45:02.118248   45567 main.go:144] libmachine: waiting for IP...
	I1219 03:45:02.119137   45567 main.go:144] libmachine: domain force-systemd-env-919893 has defined MAC address 52:54:00:4b:b6:04 in network mk-force-systemd-env-919893
	I1219 03:45:02.119760   45567 main.go:144] libmachine: no network interface addresses found for domain force-systemd-env-919893 (source=lease)
	I1219 03:45:02.119776   45567 main.go:144] libmachine: trying to list again with source=arp
	I1219 03:45:02.120140   45567 main.go:144] libmachine: unable to find current IP address of domain force-systemd-env-919893 in network mk-force-systemd-env-919893 (interfaces detected: [])
	I1219 03:45:02.120178   45567 retry.go:31] will retry after 220.853068ms: waiting for domain to come up
	I1219 03:45:02.342791   45567 main.go:144] libmachine: domain force-systemd-env-919893 has defined MAC address 52:54:00:4b:b6:04 in network mk-force-systemd-env-919893
	I1219 03:45:02.343664   45567 main.go:144] libmachine: no network interface addresses found for domain force-systemd-env-919893 (source=lease)
	I1219 03:45:02.343687   45567 main.go:144] libmachine: trying to list again with source=arp
	I1219 03:45:02.344173   45567 main.go:144] libmachine: unable to find current IP address of domain force-systemd-env-919893 in network mk-force-systemd-env-919893 (interfaces detected: [])
	I1219 03:45:02.344213   45567 retry.go:31] will retry after 338.10356ms: waiting for domain to come up
	I1219 03:45:02.683968   45567 main.go:144] libmachine: domain force-systemd-env-919893 has defined MAC address 52:54:00:4b:b6:04 in network mk-force-systemd-env-919893
	I1219 03:45:02.684758   45567 main.go:144] libmachine: no network interface addresses found for domain force-systemd-env-919893 (source=lease)
	I1219 03:45:02.684778   45567 main.go:144] libmachine: trying to list again with source=arp
	I1219 03:45:02.685236   45567 main.go:144] libmachine: unable to find current IP address of domain force-systemd-env-919893 in network mk-force-systemd-env-919893 (interfaces detected: [])
	I1219 03:45:02.685284   45567 retry.go:31] will retry after 380.595104ms: waiting for domain to come up
	I1219 03:45:03.069585   45567 main.go:144] libmachine: domain force-systemd-env-919893 has defined MAC address 52:54:00:4b:b6:04 in network mk-force-systemd-env-919893
	I1219 03:45:03.263637   45567 main.go:144] libmachine: no network interface addresses found for domain force-systemd-env-919893 (source=lease)
	I1219 03:45:03.263656   45567 main.go:144] libmachine: trying to list again with source=arp
	I1219 03:45:03.264437   45567 main.go:144] libmachine: unable to find current IP address of domain force-systemd-env-919893 in network mk-force-systemd-env-919893 (interfaces detected: [])
	I1219 03:45:03.264484   45567 retry.go:31] will retry after 392.428668ms: waiting for domain to come up
	I1219 03:45:03.658477   45567 main.go:144] libmachine: domain force-systemd-env-919893 has defined MAC address 52:54:00:4b:b6:04 in network mk-force-systemd-env-919893
	I1219 03:45:03.659398   45567 main.go:144] libmachine: no network interface addresses found for domain force-systemd-env-919893 (source=lease)
	I1219 03:45:03.659421   45567 main.go:144] libmachine: trying to list again with source=arp
	I1219 03:45:03.660084   45567 main.go:144] libmachine: unable to find current IP address of domain force-systemd-env-919893 in network mk-force-systemd-env-919893 (interfaces detected: [])
	I1219 03:45:03.660140   45567 retry.go:31] will retry after 528.805797ms: waiting for domain to come up
	I1219 03:45:04.190923   45567 main.go:144] libmachine: domain force-systemd-env-919893 has defined MAC address 52:54:00:4b:b6:04 in network mk-force-systemd-env-919893
	I1219 03:45:04.191702   45567 main.go:144] libmachine: no network interface addresses found for domain force-systemd-env-919893 (source=lease)
	I1219 03:45:04.191719   45567 main.go:144] libmachine: trying to list again with source=arp
	I1219 03:45:04.192106   45567 main.go:144] libmachine: unable to find current IP address of domain force-systemd-env-919893 in network mk-force-systemd-env-919893 (interfaces detected: [])
	I1219 03:45:04.192143   45567 retry.go:31] will retry after 874.305615ms: waiting for domain to come up
	I1219 03:45:05.068138   45567 main.go:144] libmachine: domain force-systemd-env-919893 has defined MAC address 52:54:00:4b:b6:04 in network mk-force-systemd-env-919893
	I1219 03:45:05.068894   45567 main.go:144] libmachine: no network interface addresses found for domain force-systemd-env-919893 (source=lease)
	I1219 03:45:05.068917   45567 main.go:144] libmachine: trying to list again with source=arp
	I1219 03:45:05.069373   45567 main.go:144] libmachine: unable to find current IP address of domain force-systemd-env-919893 in network mk-force-systemd-env-919893 (interfaces detected: [])
	I1219 03:45:05.069408   45567 retry.go:31] will retry after 1.039844515s: waiting for domain to come up
	I1219 03:45:06.110557   45567 main.go:144] libmachine: domain force-systemd-env-919893 has defined MAC address 52:54:00:4b:b6:04 in network mk-force-systemd-env-919893
	I1219 03:45:06.111240   45567 main.go:144] libmachine: no network interface addresses found for domain force-systemd-env-919893 (source=lease)
	I1219 03:45:06.111258   45567 main.go:144] libmachine: trying to list again with source=arp
	I1219 03:45:06.111645   45567 main.go:144] libmachine: unable to find current IP address of domain force-systemd-env-919893 in network mk-force-systemd-env-919893 (interfaces detected: [])
	I1219 03:45:06.111677   45567 retry.go:31] will retry after 900.852655ms: waiting for domain to come up
	I1219 03:45:07.014605   45567 main.go:144] libmachine: domain force-systemd-env-919893 has defined MAC address 52:54:00:4b:b6:04 in network mk-force-systemd-env-919893
	I1219 03:45:07.015232   45567 main.go:144] libmachine: no network interface addresses found for domain force-systemd-env-919893 (source=lease)
	I1219 03:45:07.015248   45567 main.go:144] libmachine: trying to list again with source=arp
	I1219 03:45:07.015634   45567 main.go:144] libmachine: unable to find current IP address of domain force-systemd-env-919893 in network mk-force-systemd-env-919893 (interfaces detected: [])
	I1219 03:45:07.015667   45567 retry.go:31] will retry after 1.625422219s: waiting for domain to come up
	I1219 03:45:03.292952   44926 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:45:03.293765   44926 addons.go:546] duration metric: took 4.54868ms for enable addons: enabled=[]
	I1219 03:45:03.529635   44926 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:45:03.558489   44926 node_ready.go:35] waiting up to 6m0s for node "pause-813136" to be "Ready" ...
	I1219 03:45:03.562709   44926 node_ready.go:49] node "pause-813136" is "Ready"
	I1219 03:45:03.562738   44926 node_ready.go:38] duration metric: took 4.209733ms for node "pause-813136" to be "Ready" ...
	I1219 03:45:03.562752   44926 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:45:03.562805   44926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:45:03.599465   44926 api_server.go:72] duration metric: took 310.673933ms to wait for apiserver process to appear ...
	I1219 03:45:03.599498   44926 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:45:03.599520   44926 api_server.go:253] Checking apiserver healthz at https://192.168.50.162:8443/healthz ...
	I1219 03:45:03.607925   44926 api_server.go:279] https://192.168.50.162:8443/healthz returned 200:
	ok
	I1219 03:45:03.609182   44926 api_server.go:141] control plane version: v1.34.3
	I1219 03:45:03.609204   44926 api_server.go:131] duration metric: took 9.699245ms to wait for apiserver health ...
	I1219 03:45:03.609214   44926 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:45:03.612197   44926 system_pods.go:59] 6 kube-system pods found
	I1219 03:45:03.612227   44926 system_pods.go:61] "coredns-66bc5c9577-7b6qk" [99badcb6-8825-498e-a57c-e34b1ae19d49] Running
	I1219 03:45:03.612240   44926 system_pods.go:61] "etcd-pause-813136" [e3038183-c5cc-4bdb-82c9-9f74195972cd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:45:03.612254   44926 system_pods.go:61] "kube-apiserver-pause-813136" [223ca060-74b0-44c7-b091-b40cad860b31] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:45:03.612282   44926 system_pods.go:61] "kube-controller-manager-pause-813136" [892f5cf0-f3c5-426d-af78-0fbb445e241b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:45:03.612291   44926 system_pods.go:61] "kube-proxy-kxqfm" [12a1b329-3734-4dfb-be1c-6f3ba324c031] Running
	I1219 03:45:03.612301   44926 system_pods.go:61] "kube-scheduler-pause-813136" [85497637-0b2a-468e-b3ce-0c7810cda572] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:45:03.612311   44926 system_pods.go:74] duration metric: took 3.08879ms to wait for pod list to return data ...
	I1219 03:45:03.612324   44926 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:45:03.614937   44926 default_sa.go:45] found service account: "default"
	I1219 03:45:03.614959   44926 default_sa.go:55] duration metric: took 2.627057ms for default service account to be created ...
	I1219 03:45:03.614970   44926 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:45:03.617864   44926 system_pods.go:86] 6 kube-system pods found
	I1219 03:45:03.617892   44926 system_pods.go:89] "coredns-66bc5c9577-7b6qk" [99badcb6-8825-498e-a57c-e34b1ae19d49] Running
	I1219 03:45:03.617910   44926 system_pods.go:89] "etcd-pause-813136" [e3038183-c5cc-4bdb-82c9-9f74195972cd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:45:03.617920   44926 system_pods.go:89] "kube-apiserver-pause-813136" [223ca060-74b0-44c7-b091-b40cad860b31] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:45:03.617936   44926 system_pods.go:89] "kube-controller-manager-pause-813136" [892f5cf0-f3c5-426d-af78-0fbb445e241b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:45:03.617942   44926 system_pods.go:89] "kube-proxy-kxqfm" [12a1b329-3734-4dfb-be1c-6f3ba324c031] Running
	I1219 03:45:03.617949   44926 system_pods.go:89] "kube-scheduler-pause-813136" [85497637-0b2a-468e-b3ce-0c7810cda572] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:45:03.617957   44926 system_pods.go:126] duration metric: took 2.981371ms to wait for k8s-apps to be running ...
	I1219 03:45:03.617966   44926 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:45:03.618009   44926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:45:03.646461   44926 system_svc.go:56] duration metric: took 28.486559ms WaitForService to wait for kubelet
	I1219 03:45:03.646490   44926 kubeadm.go:587] duration metric: took 357.703864ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:45:03.646509   44926 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:45:03.649192   44926 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1219 03:45:03.649212   44926 node_conditions.go:123] node cpu capacity is 2
	I1219 03:45:03.649225   44926 node_conditions.go:105] duration metric: took 2.710306ms to run NodePressure ...
	I1219 03:45:03.649245   44926 start.go:242] waiting for startup goroutines ...
	I1219 03:45:03.649255   44926 start.go:247] waiting for cluster config update ...
	I1219 03:45:03.649266   44926 start.go:256] writing updated cluster config ...
	I1219 03:45:03.649615   44926 ssh_runner.go:195] Run: rm -f paused
	I1219 03:45:03.659545   44926 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:45:03.660331   44926 kapi.go:59] client config for pause-813136: &rest.Config{Host:"https://192.168.50.162:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22230-5010/.minikube/profiles/pause-813136/client.crt", KeyFile:"/home/jenkins/minikube-integration/22230-5010/.minikube/profiles/pause-813136/client.key", CAFile:"/home/jenkins/minikube-integration/22230-5010/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]strin
g(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2863880), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1219 03:45:03.665153   44926 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7b6qk" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:45:03.670719   44926 pod_ready.go:94] pod "coredns-66bc5c9577-7b6qk" is "Ready"
	I1219 03:45:03.670743   44926 pod_ready.go:86] duration metric: took 5.565934ms for pod "coredns-66bc5c9577-7b6qk" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:45:03.673083   44926 pod_ready.go:83] waiting for pod "etcd-pause-813136" in "kube-system" namespace to be "Ready" or be gone ...
	W1219 03:45:05.680393   44926 pod_ready.go:104] pod "etcd-pause-813136" is not "Ready", error: <nil>
	I1219 03:45:03.813998   45792 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1219 03:45:03.814018   45792 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-5010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1219 03:45:03.814023   45792 cache.go:65] Caching tarball of preloaded images
	I1219 03:45:03.814087   45792 preload.go:238] Found /home/jenkins/minikube-integration/22230-5010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1219 03:45:03.814093   45792 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1219 03:45:03.814164   45792 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/cert-expiration-387964/config.json ...
	I1219 03:45:03.814180   45792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/cert-expiration-387964/config.json: {Name:mk6a2e3f6adc3f0e8b157acf7d31946b1636bce1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:45:03.814340   45792 start.go:360] acquireMachinesLock for cert-expiration-387964: {Name:mk229398d29442d4d52885bbac963e5004bfbfba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1219 03:45:08.643455   45567 main.go:144] libmachine: domain force-systemd-env-919893 has defined MAC address 52:54:00:4b:b6:04 in network mk-force-systemd-env-919893
	I1219 03:45:08.644076   45567 main.go:144] libmachine: no network interface addresses found for domain force-systemd-env-919893 (source=lease)
	I1219 03:45:08.644096   45567 main.go:144] libmachine: trying to list again with source=arp
	I1219 03:45:08.644547   45567 main.go:144] libmachine: unable to find current IP address of domain force-systemd-env-919893 in network mk-force-systemd-env-919893 (interfaces detected: [])
	I1219 03:45:08.644603   45567 retry.go:31] will retry after 1.942522307s: waiting for domain to come up
	I1219 03:45:10.588702   45567 main.go:144] libmachine: domain force-systemd-env-919893 has defined MAC address 52:54:00:4b:b6:04 in network mk-force-systemd-env-919893
	I1219 03:45:10.589377   45567 main.go:144] libmachine: no network interface addresses found for domain force-systemd-env-919893 (source=lease)
	I1219 03:45:10.589393   45567 main.go:144] libmachine: trying to list again with source=arp
	I1219 03:45:10.589940   45567 main.go:144] libmachine: unable to find current IP address of domain force-systemd-env-919893 in network mk-force-systemd-env-919893 (interfaces detected: [])
	I1219 03:45:10.589980   45567 retry.go:31] will retry after 2.278647694s: waiting for domain to come up
	W1219 03:45:07.680746   44926 pod_ready.go:104] pod "etcd-pause-813136" is not "Ready", error: <nil>
	W1219 03:45:10.183213   44926 pod_ready.go:104] pod "etcd-pause-813136" is not "Ready", error: <nil>
	W1219 03:45:12.678726   44926 pod_ready.go:104] pod "etcd-pause-813136" is not "Ready", error: <nil>
	W1219 03:45:14.679344   44926 pod_ready.go:104] pod "etcd-pause-813136" is not "Ready", error: <nil>
	I1219 03:45:15.178776   44926 pod_ready.go:94] pod "etcd-pause-813136" is "Ready"
	I1219 03:45:15.178800   44926 pod_ready.go:86] duration metric: took 11.505698713s for pod "etcd-pause-813136" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:45:15.181512   44926 pod_ready.go:83] waiting for pod "kube-apiserver-pause-813136" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:45:15.185065   44926 pod_ready.go:94] pod "kube-apiserver-pause-813136" is "Ready"
	I1219 03:45:15.185082   44926 pod_ready.go:86] duration metric: took 3.553393ms for pod "kube-apiserver-pause-813136" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:45:15.188507   44926 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-813136" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:45:15.192337   44926 pod_ready.go:94] pod "kube-controller-manager-pause-813136" is "Ready"
	I1219 03:45:15.192354   44926 pod_ready.go:86] duration metric: took 3.832239ms for pod "kube-controller-manager-pause-813136" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:45:15.193945   44926 pod_ready.go:83] waiting for pod "kube-proxy-kxqfm" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:45:15.376868   44926 pod_ready.go:94] pod "kube-proxy-kxqfm" is "Ready"
	I1219 03:45:15.376894   44926 pod_ready.go:86] duration metric: took 182.927624ms for pod "kube-proxy-kxqfm" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:45:15.577249   44926 pod_ready.go:83] waiting for pod "kube-scheduler-pause-813136" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:45:15.977537   44926 pod_ready.go:94] pod "kube-scheduler-pause-813136" is "Ready"
	I1219 03:45:15.977585   44926 pod_ready.go:86] duration metric: took 400.283285ms for pod "kube-scheduler-pause-813136" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:45:15.977607   44926 pod_ready.go:40] duration metric: took 12.31801015s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:45:16.022052   44926 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 03:45:16.023909   44926 out.go:179] * Done! kubectl is now configured to use "pause-813136" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 19 03:45:16 pause-813136 crio[2828]: time="2025-12-19 03:45:16.567871941Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766115916567852336,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:128011,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d6987365-aefb-47d4-b075-0fc6498c54bc name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 03:45:16 pause-813136 crio[2828]: time="2025-12-19 03:45:16.568801995Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=87b4f230-f62a-4ae2-9eba-d4c3ed5dd903 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 03:45:16 pause-813136 crio[2828]: time="2025-12-19 03:45:16.568852392Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=87b4f230-f62a-4ae2-9eba-d4c3ed5dd903 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 03:45:16 pause-813136 crio[2828]: time="2025-12-19 03:45:16.569072266Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0e3b73cba10f1de7e59c5357ad7e6a8855562c9e28088a1f26770e5c2a5c6b10,PodSandboxId:420fab7432d7a415708548ba6a95a89f0781c322e959973f31aac572152a9bfb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766115898322374472,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-813136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81427598da6ffeede2148b7ada3bb803,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\"
:\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f475fc955ac4750ebbe4b3f8c05722a7b60f12916f2a2f3ee30fab6baf28977,PodSandboxId:3c099d483dc21a6a65340493acc982e1ffbb20e3446daed5ce050c4ae86907e5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766115898296252330,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-813136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59437dc8a57155b1725300f728d58a45,},Annot
ations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49abc639ac06f44884351929f22ff79292ad918b0c774486fdc7c323c07395f3,PodSandboxId:32a265f1506cea89cd1c34f047a22965e4fb9999957e1fa798ddd140c2716a55,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1766115898286432338,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-813136
,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9014fff5f634f186b58a0042a2acb36f,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49975ac9ed6143031b48b7ae6cc7d360646e2658e828592bb24fcd876c6d6e64,PodSandboxId:944e3ad7c03797e8f6bdbbfaf57cfb60768e2cad815fea67aed03ec0f0da3a81,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766115898263700136,Labels:map[string]string{io.
kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-813136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74b323c3c08c8f8cb968a4a58c7bcdd2,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:561fa3e63a07ee523e11e2e89b5973a0193cc158c8eeb0229d903ba3518d5308,PodSandboxId:5b7fd3fafdf1a5e4e0079600a8e7bca3db2114896e424cbc0b7b8611c0dc60ec,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Sta
te:CONTAINER_RUNNING,CreatedAt:1766115890062262717,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kxqfm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12a1b329-3734-4dfb-be1c-6f3ba324c031,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:405eaa8dfbfd9068d752d97691deba809cbdf064d5f44c9b6c0ff2099dd2ab72,PodSandboxId:119c95865e5ce292e89b9c0365a614c958a42b8061f518c58672018745b72391,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:17661
15888666790317,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7b6qk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99badcb6-8825-498e-a57c-e34b1ae19d49,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e56805f91d0063dda29f58987a1a7dc2b0e77b097271529fa0b9163bf52ac142,PodSandboxId:119c95865e5ce292e89b9c0365a614c958a42b8061f518
c58672018745b72391,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1766115867986072659,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7b6qk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99badcb6-8825-498e-a57c-e34b1ae19d49,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:962f0a8a67335a88760ed2a1a0c705e78a09cf72bc575a1e326bbade3377b38f,PodSandboxId:5b7fd3fafdf1a5e4e0079600a8e7bca3db2114896e424cbc0b7b8611c0dc60ec,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1766115867099785526,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kxqfm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12a1b329-3734-4dfb-be1c-6f3ba324c031,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ad4d5a9860531cb61c9a0e3b9d0bf2ff3b7f81b1ef30c9aa66047be00028f5e,PodSandboxId:944e3ad7c03797e8f6bdbbfaf57cfb60768e2cad815fea67aed03ec0f0da3a81,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1766115867085193269,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-813136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74b323c3c08c8f8cb968a4a58c7bcdd2,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\"
:2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ff9551c32b87b4e8e57071827c7b999a4d9dccf0cb90592e1b30972b339fc43,PodSandboxId:3c099d483dc21a6a65340493acc982e1ffbb20e3446daed5ce050c4ae86907e5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1766115867050538295,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-813136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59437dc8a57155b1725300f728d58a45,},Annotations:map[string]string{io.kubernetes.container.hash:
20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18cd24c1b1176f2a47b1a9b256f8e63a5e52048661701cfcd864c2f888cd8444,PodSandboxId:420fab7432d7a415708548ba6a95a89f0781c322e959973f31aac572152a9bfb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedAt:1766115866999261778,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-813136,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 81427598da6ffeede2148b7ada3bb803,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fe030c193f1e8b9d4b9f05840eff806d4133ae4e14349cdc0bf44b641318844,PodSandboxId:32a265f1506cea89cd1c34f047a22965e4fb9999957e1fa798ddd140c2716a55,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_EXITED,CreatedAt:1766115866983961320,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-813136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9014fff5f634f186b58a0042a2acb36f,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=87b4f230-f62a-4ae2-9eba-d4c3ed5dd903 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 03:45:16 pause-813136 crio[2828]: time="2025-12-19 03:45:16.605568102Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=17ba3aea-5421-4992-9965-ff4535a50883 name=/runtime.v1.RuntimeService/Version
	Dec 19 03:45:16 pause-813136 crio[2828]: time="2025-12-19 03:45:16.605627811Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=17ba3aea-5421-4992-9965-ff4535a50883 name=/runtime.v1.RuntimeService/Version
	Dec 19 03:45:16 pause-813136 crio[2828]: time="2025-12-19 03:45:16.606822087Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ac4b6c34-c954-475e-b4d7-08dc754e78ba name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 03:45:16 pause-813136 crio[2828]: time="2025-12-19 03:45:16.607384203Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766115916607364194,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:128011,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ac4b6c34-c954-475e-b4d7-08dc754e78ba name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 03:45:16 pause-813136 crio[2828]: time="2025-12-19 03:45:16.608621356Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c16db028-c262-4ca1-b313-1c82873f5112 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 03:45:16 pause-813136 crio[2828]: time="2025-12-19 03:45:16.608729225Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c16db028-c262-4ca1-b313-1c82873f5112 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 03:45:16 pause-813136 crio[2828]: time="2025-12-19 03:45:16.609690650Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0e3b73cba10f1de7e59c5357ad7e6a8855562c9e28088a1f26770e5c2a5c6b10,PodSandboxId:420fab7432d7a415708548ba6a95a89f0781c322e959973f31aac572152a9bfb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766115898322374472,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-813136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81427598da6ffeede2148b7ada3bb803,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\"
:\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f475fc955ac4750ebbe4b3f8c05722a7b60f12916f2a2f3ee30fab6baf28977,PodSandboxId:3c099d483dc21a6a65340493acc982e1ffbb20e3446daed5ce050c4ae86907e5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766115898296252330,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-813136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59437dc8a57155b1725300f728d58a45,},Annot
ations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49abc639ac06f44884351929f22ff79292ad918b0c774486fdc7c323c07395f3,PodSandboxId:32a265f1506cea89cd1c34f047a22965e4fb9999957e1fa798ddd140c2716a55,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1766115898286432338,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-813136
,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9014fff5f634f186b58a0042a2acb36f,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49975ac9ed6143031b48b7ae6cc7d360646e2658e828592bb24fcd876c6d6e64,PodSandboxId:944e3ad7c03797e8f6bdbbfaf57cfb60768e2cad815fea67aed03ec0f0da3a81,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766115898263700136,Labels:map[string]string{io.
kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-813136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74b323c3c08c8f8cb968a4a58c7bcdd2,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:561fa3e63a07ee523e11e2e89b5973a0193cc158c8eeb0229d903ba3518d5308,PodSandboxId:5b7fd3fafdf1a5e4e0079600a8e7bca3db2114896e424cbc0b7b8611c0dc60ec,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Sta
te:CONTAINER_RUNNING,CreatedAt:1766115890062262717,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kxqfm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12a1b329-3734-4dfb-be1c-6f3ba324c031,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:405eaa8dfbfd9068d752d97691deba809cbdf064d5f44c9b6c0ff2099dd2ab72,PodSandboxId:119c95865e5ce292e89b9c0365a614c958a42b8061f518c58672018745b72391,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:17661
15888666790317,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7b6qk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99badcb6-8825-498e-a57c-e34b1ae19d49,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e56805f91d0063dda29f58987a1a7dc2b0e77b097271529fa0b9163bf52ac142,PodSandboxId:119c95865e5ce292e89b9c0365a614c958a42b8061f518
c58672018745b72391,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1766115867986072659,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7b6qk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99badcb6-8825-498e-a57c-e34b1ae19d49,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:962f0a8a67335a88760ed2a1a0c705e78a09cf72bc575a1e326bbade3377b38f,PodSandboxId:5b7fd3fafdf1a5e4e0079600a8e7bca3db2114896e424cbc0b7b8611c0dc60ec,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1766115867099785526,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kxqfm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12a1b329-3734-4dfb-be1c-6f3ba324c031,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ad4d5a9860531cb61c9a0e3b9d0bf2ff3b7f81b1ef30c9aa66047be00028f5e,PodSandboxId:944e3ad7c03797e8f6bdbbfaf57cfb60768e2cad815fea67aed03ec0f0da3a81,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1766115867085193269,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-813136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74b323c3c08c8f8cb968a4a58c7bcdd2,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\"
:2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ff9551c32b87b4e8e57071827c7b999a4d9dccf0cb90592e1b30972b339fc43,PodSandboxId:3c099d483dc21a6a65340493acc982e1ffbb20e3446daed5ce050c4ae86907e5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1766115867050538295,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-813136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59437dc8a57155b1725300f728d58a45,},Annotations:map[string]string{io.kubernetes.container.hash:
20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18cd24c1b1176f2a47b1a9b256f8e63a5e52048661701cfcd864c2f888cd8444,PodSandboxId:420fab7432d7a415708548ba6a95a89f0781c322e959973f31aac572152a9bfb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedAt:1766115866999261778,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-813136,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 81427598da6ffeede2148b7ada3bb803,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fe030c193f1e8b9d4b9f05840eff806d4133ae4e14349cdc0bf44b641318844,PodSandboxId:32a265f1506cea89cd1c34f047a22965e4fb9999957e1fa798ddd140c2716a55,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_EXITED,CreatedAt:1766115866983961320,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-813136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9014fff5f634f186b58a0042a2acb36f,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c16db028-c262-4ca1-b313-1c82873f5112 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 03:45:16 pause-813136 crio[2828]: time="2025-12-19 03:45:16.648983122Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2a64fecf-2db3-4cef-bf40-b716c7f70e18 name=/runtime.v1.RuntimeService/Version
	Dec 19 03:45:16 pause-813136 crio[2828]: time="2025-12-19 03:45:16.649063575Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2a64fecf-2db3-4cef-bf40-b716c7f70e18 name=/runtime.v1.RuntimeService/Version
	Dec 19 03:45:16 pause-813136 crio[2828]: time="2025-12-19 03:45:16.650004507Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0149120a-76d2-4465-827e-88e3c9efebca name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 03:45:16 pause-813136 crio[2828]: time="2025-12-19 03:45:16.650464526Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766115916650438585,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:128011,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0149120a-76d2-4465-827e-88e3c9efebca name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 03:45:16 pause-813136 crio[2828]: time="2025-12-19 03:45:16.651470631Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5b58b5cf-e2ec-439f-b5f6-8aaf9dc823c9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 03:45:16 pause-813136 crio[2828]: time="2025-12-19 03:45:16.651535193Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5b58b5cf-e2ec-439f-b5f6-8aaf9dc823c9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 03:45:16 pause-813136 crio[2828]: time="2025-12-19 03:45:16.652394105Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0e3b73cba10f1de7e59c5357ad7e6a8855562c9e28088a1f26770e5c2a5c6b10,PodSandboxId:420fab7432d7a415708548ba6a95a89f0781c322e959973f31aac572152a9bfb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766115898322374472,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-813136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81427598da6ffeede2148b7ada3bb803,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\"
:\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f475fc955ac4750ebbe4b3f8c05722a7b60f12916f2a2f3ee30fab6baf28977,PodSandboxId:3c099d483dc21a6a65340493acc982e1ffbb20e3446daed5ce050c4ae86907e5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766115898296252330,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-813136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59437dc8a57155b1725300f728d58a45,},Annot
ations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49abc639ac06f44884351929f22ff79292ad918b0c774486fdc7c323c07395f3,PodSandboxId:32a265f1506cea89cd1c34f047a22965e4fb9999957e1fa798ddd140c2716a55,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1766115898286432338,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-813136
,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9014fff5f634f186b58a0042a2acb36f,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49975ac9ed6143031b48b7ae6cc7d360646e2658e828592bb24fcd876c6d6e64,PodSandboxId:944e3ad7c03797e8f6bdbbfaf57cfb60768e2cad815fea67aed03ec0f0da3a81,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766115898263700136,Labels:map[string]string{io.
kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-813136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74b323c3c08c8f8cb968a4a58c7bcdd2,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:561fa3e63a07ee523e11e2e89b5973a0193cc158c8eeb0229d903ba3518d5308,PodSandboxId:5b7fd3fafdf1a5e4e0079600a8e7bca3db2114896e424cbc0b7b8611c0dc60ec,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Sta
te:CONTAINER_RUNNING,CreatedAt:1766115890062262717,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kxqfm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12a1b329-3734-4dfb-be1c-6f3ba324c031,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:405eaa8dfbfd9068d752d97691deba809cbdf064d5f44c9b6c0ff2099dd2ab72,PodSandboxId:119c95865e5ce292e89b9c0365a614c958a42b8061f518c58672018745b72391,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:17661
15888666790317,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7b6qk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99badcb6-8825-498e-a57c-e34b1ae19d49,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e56805f91d0063dda29f58987a1a7dc2b0e77b097271529fa0b9163bf52ac142,PodSandboxId:119c95865e5ce292e89b9c0365a614c958a42b8061f518
c58672018745b72391,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1766115867986072659,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7b6qk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99badcb6-8825-498e-a57c-e34b1ae19d49,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:962f0a8a67335a88760ed2a1a0c705e78a09cf72bc575a1e326bbade3377b38f,PodSandboxId:5b7fd3fafdf1a5e4e0079600a8e7bca3db2114896e424cbc0b7b8611c0dc60ec,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1766115867099785526,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kxqfm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12a1b329-3734-4dfb-be1c-6f3ba324c031,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ad4d5a9860531cb61c9a0e3b9d0bf2ff3b7f81b1ef30c9aa66047be00028f5e,PodSandboxId:944e3ad7c03797e8f6bdbbfaf57cfb60768e2cad815fea67aed03ec0f0da3a81,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1766115867085193269,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-813136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74b323c3c08c8f8cb968a4a58c7bcdd2,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\"
:2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ff9551c32b87b4e8e57071827c7b999a4d9dccf0cb90592e1b30972b339fc43,PodSandboxId:3c099d483dc21a6a65340493acc982e1ffbb20e3446daed5ce050c4ae86907e5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1766115867050538295,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-813136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59437dc8a57155b1725300f728d58a45,},Annotations:map[string]string{io.kubernetes.container.hash:
20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18cd24c1b1176f2a47b1a9b256f8e63a5e52048661701cfcd864c2f888cd8444,PodSandboxId:420fab7432d7a415708548ba6a95a89f0781c322e959973f31aac572152a9bfb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedAt:1766115866999261778,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-813136,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 81427598da6ffeede2148b7ada3bb803,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fe030c193f1e8b9d4b9f05840eff806d4133ae4e14349cdc0bf44b641318844,PodSandboxId:32a265f1506cea89cd1c34f047a22965e4fb9999957e1fa798ddd140c2716a55,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_EXITED,CreatedAt:1766115866983961320,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-813136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9014fff5f634f186b58a0042a2acb36f,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5b58b5cf-e2ec-439f-b5f6-8aaf9dc823c9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 03:45:16 pause-813136 crio[2828]: time="2025-12-19 03:45:16.693467516Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4c996aee-6d43-4ef9-8db7-b5073902f3c2 name=/runtime.v1.RuntimeService/Version
	Dec 19 03:45:16 pause-813136 crio[2828]: time="2025-12-19 03:45:16.693548167Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4c996aee-6d43-4ef9-8db7-b5073902f3c2 name=/runtime.v1.RuntimeService/Version
	Dec 19 03:45:16 pause-813136 crio[2828]: time="2025-12-19 03:45:16.694820285Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=33afe764-b0d2-4799-8e9e-85b013fe4967 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 03:45:16 pause-813136 crio[2828]: time="2025-12-19 03:45:16.695319211Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766115916695294917,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:128011,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=33afe764-b0d2-4799-8e9e-85b013fe4967 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 03:45:16 pause-813136 crio[2828]: time="2025-12-19 03:45:16.696027586Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=94dde7ba-1efa-4fe1-880a-d4db9dfa41db name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 03:45:16 pause-813136 crio[2828]: time="2025-12-19 03:45:16.696077019Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=94dde7ba-1efa-4fe1-880a-d4db9dfa41db name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 03:45:16 pause-813136 crio[2828]: time="2025-12-19 03:45:16.696420039Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0e3b73cba10f1de7e59c5357ad7e6a8855562c9e28088a1f26770e5c2a5c6b10,PodSandboxId:420fab7432d7a415708548ba6a95a89f0781c322e959973f31aac572152a9bfb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766115898322374472,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-813136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81427598da6ffeede2148b7ada3bb803,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\"
:\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f475fc955ac4750ebbe4b3f8c05722a7b60f12916f2a2f3ee30fab6baf28977,PodSandboxId:3c099d483dc21a6a65340493acc982e1ffbb20e3446daed5ce050c4ae86907e5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766115898296252330,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-813136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59437dc8a57155b1725300f728d58a45,},Annot
ations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49abc639ac06f44884351929f22ff79292ad918b0c774486fdc7c323c07395f3,PodSandboxId:32a265f1506cea89cd1c34f047a22965e4fb9999957e1fa798ddd140c2716a55,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1766115898286432338,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-813136
,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9014fff5f634f186b58a0042a2acb36f,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49975ac9ed6143031b48b7ae6cc7d360646e2658e828592bb24fcd876c6d6e64,PodSandboxId:944e3ad7c03797e8f6bdbbfaf57cfb60768e2cad815fea67aed03ec0f0da3a81,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766115898263700136,Labels:map[string]string{io.
kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-813136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74b323c3c08c8f8cb968a4a58c7bcdd2,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:561fa3e63a07ee523e11e2e89b5973a0193cc158c8eeb0229d903ba3518d5308,PodSandboxId:5b7fd3fafdf1a5e4e0079600a8e7bca3db2114896e424cbc0b7b8611c0dc60ec,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Sta
te:CONTAINER_RUNNING,CreatedAt:1766115890062262717,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kxqfm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12a1b329-3734-4dfb-be1c-6f3ba324c031,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:405eaa8dfbfd9068d752d97691deba809cbdf064d5f44c9b6c0ff2099dd2ab72,PodSandboxId:119c95865e5ce292e89b9c0365a614c958a42b8061f518c58672018745b72391,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:17661
15888666790317,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7b6qk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99badcb6-8825-498e-a57c-e34b1ae19d49,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e56805f91d0063dda29f58987a1a7dc2b0e77b097271529fa0b9163bf52ac142,PodSandboxId:119c95865e5ce292e89b9c0365a614c958a42b8061f518
c58672018745b72391,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1766115867986072659,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7b6qk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99badcb6-8825-498e-a57c-e34b1ae19d49,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:962f0a8a67335a88760ed2a1a0c705e78a09cf72bc575a1e326bbade3377b38f,PodSandboxId:5b7fd3fafdf1a5e4e0079600a8e7bca3db2114896e424cbc0b7b8611c0dc60ec,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1766115867099785526,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kxqfm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12a1b329-3734-4dfb-be1c-6f3ba324c031,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ad4d5a9860531cb61c9a0e3b9d0bf2ff3b7f81b1ef30c9aa66047be00028f5e,PodSandboxId:944e3ad7c03797e8f6bdbbfaf57cfb60768e2cad815fea67aed03ec0f0da3a81,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1766115867085193269,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-813136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74b323c3c08c8f8cb968a4a58c7bcdd2,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\"
:2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ff9551c32b87b4e8e57071827c7b999a4d9dccf0cb90592e1b30972b339fc43,PodSandboxId:3c099d483dc21a6a65340493acc982e1ffbb20e3446daed5ce050c4ae86907e5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1766115867050538295,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-813136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59437dc8a57155b1725300f728d58a45,},Annotations:map[string]string{io.kubernetes.container.hash:
20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18cd24c1b1176f2a47b1a9b256f8e63a5e52048661701cfcd864c2f888cd8444,PodSandboxId:420fab7432d7a415708548ba6a95a89f0781c322e959973f31aac572152a9bfb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedAt:1766115866999261778,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-813136,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 81427598da6ffeede2148b7ada3bb803,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fe030c193f1e8b9d4b9f05840eff806d4133ae4e14349cdc0bf44b641318844,PodSandboxId:32a265f1506cea89cd1c34f047a22965e4fb9999957e1fa798ddd140c2716a55,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_EXITED,CreatedAt:1766115866983961320,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-813136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9014fff5f634f186b58a0042a2acb36f,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=94dde7ba-1efa-4fe1-880a-d4db9dfa41db name=/runtime.v1.RuntimeService/ListContainers
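	The Version, ImageFsInfo, and ListContainers entries above are ordinary CRI calls recorded by CRI-O's debug interceptors while the runtime was being polled. As a minimal sketch (assuming shell access to the pause-813136 guest, e.g. via minikube ssh, and the default CRI-O socket path that also appears in the kubelet ContainerGCFailed warning further down), the same data can be pulled by hand with crictl:

	    minikube ssh -p pause-813136                                                  # enter the guest for this profile
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version         # RuntimeName / RuntimeVersion
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo     # image filesystem usage
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a           # full container list, including exited attempts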
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	0e3b73cba10f1       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942   18 seconds ago      Running             kube-controller-manager   2                   420fab7432d7a       kube-controller-manager-pause-813136   kube-system
	9f475fc955ac4       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78   18 seconds ago      Running             kube-scheduler            2                   3c099d483dc21       kube-scheduler-pause-813136            kube-system
	49abc639ac06f       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c   18 seconds ago      Running             kube-apiserver            2                   32a265f1506ce       kube-apiserver-pause-813136            kube-system
	49975ac9ed614       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   18 seconds ago      Running             etcd                      2                   944e3ad7c0379       etcd-pause-813136                      kube-system
	561fa3e63a07e       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691   26 seconds ago      Running             kube-proxy                2                   5b7fd3fafdf1a       kube-proxy-kxqfm                       kube-system
	405eaa8dfbfd9       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   28 seconds ago      Running             coredns                   2                   119c95865e5ce       coredns-66bc5c9577-7b6qk               kube-system
	e56805f91d006       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   48 seconds ago      Exited              coredns                   1                   119c95865e5ce       coredns-66bc5c9577-7b6qk               kube-system
	962f0a8a67335       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691   49 seconds ago      Exited              kube-proxy                1                   5b7fd3fafdf1a       kube-proxy-kxqfm                       kube-system
	5ad4d5a986053       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   49 seconds ago      Exited              etcd                      1                   944e3ad7c0379       etcd-pause-813136                      kube-system
	9ff9551c32b87       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78   49 seconds ago      Exited              kube-scheduler            1                   3c099d483dc21       kube-scheduler-pause-813136            kube-system
	18cd24c1b1176       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942   49 seconds ago      Exited              kube-controller-manager   1                   420fab7432d7a       kube-controller-manager-pause-813136   kube-system
	4fe030c193f1e       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c   49 seconds ago      Exited              kube-apiserver            1                   32a265f1506ce       kube-apiserver-pause-813136            kube-system
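	Reading ATTEMPT together with CREATED: each control-plane component has an attempt-1 container that exited about 49 seconds ago and an attempt-2 container started 18-28 seconds ago, consistent with the control plane being restarted once more during the test. A hedged one-liner for inspecting a single component (the --name filter and -o json output are standard crictl flags; kube-apiserver is just an example):

	    sudo crictl ps -a --name kube-apiserver -o json   # shows both the exited attempt 1 and the running attempt 2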
	
	
	==> coredns [405eaa8dfbfd9068d752d97691deba809cbdf064d5f44c9b6c0ff2099dd2ab72] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60179 - 49965 "HINFO IN 634547219968973713.595630748506419850. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.074980332s
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [e56805f91d0063dda29f58987a1a7dc2b0e77b097271529fa0b9163bf52ac142] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:34071 - 8539 "HINFO IN 6655959907211568805.2874153447990880425. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.026333941s
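	The two coredns blocks are the logs of the running attempt-2 container (405eaa8dfbfd9...) and the exited attempt-1 container (e56805f91d006...), both from pod coredns-66bc5c9577-7b6qk. A minimal sketch for pulling the same logs, assuming the kubeconfig context carries the profile name (as minikube normally sets it):

	    kubectl --context pause-813136 -n kube-system logs coredns-66bc5c9577-7b6qk               # current (attempt 2) container
	    kubectl --context pause-813136 -n kube-system logs coredns-66bc5c9577-7b6qk --previous    # last terminated container
	    sudo crictl logs e56805f91d006                                                            # or directly by container ID inside the VM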
	
	
	==> describe nodes <==
	Name:               pause-813136
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-813136
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=pause-813136
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T03_43_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 03:43:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-813136
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 03:45:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 03:45:01 +0000   Fri, 19 Dec 2025 03:43:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 03:45:01 +0000   Fri, 19 Dec 2025 03:43:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 03:45:01 +0000   Fri, 19 Dec 2025 03:43:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 03:45:01 +0000   Fri, 19 Dec 2025 03:43:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.162
	  Hostname:    pause-813136
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 a3118eb3fc0f42039cb96b373705a0ee
	  System UUID:                a3118eb3-fc0f-4203-9cb9-6b373705a0ee
	  Boot ID:                    42f1e44e-9222-404e-aa9c-3ad45b425eff
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-7b6qk                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     106s
	  kube-system                 etcd-pause-813136                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         111s
	  kube-system                 kube-apiserver-pause-813136             250m (12%)    0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-pause-813136    200m (10%)    0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-kxqfm                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-pause-813136             100m (5%)     0 (0%)      0 (0%)           0 (0%)         111s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 105s                 kube-proxy       
	  Normal   Starting                 26s                  kube-proxy       
	  Normal   Starting                 46s                  kube-proxy       
	  Normal   NodeHasSufficientPID     119s (x7 over 119s)  kubelet          Node pause-813136 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    119s (x8 over 119s)  kubelet          Node pause-813136 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  119s (x8 over 119s)  kubelet          Node pause-813136 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  119s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 112s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  111s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  111s                 kubelet          Node pause-813136 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    111s                 kubelet          Node pause-813136 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     111s                 kubelet          Node pause-813136 status is now: NodeHasSufficientPID
	  Normal   NodeReady                110s                 kubelet          Node pause-813136 status is now: NodeReady
	  Normal   RegisteredNode           107s                 node-controller  Node pause-813136 event: Registered Node pause-813136 in Controller
	  Warning  ContainerGCFailed        51s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           42s                  node-controller  Node pause-813136 event: Registered Node pause-813136 in Controller
	  Normal   Starting                 19s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  19s (x8 over 19s)    kubelet          Node pause-813136 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    19s (x8 over 19s)    kubelet          Node pause-813136 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     19s (x7 over 19s)    kubelet          Node pause-813136 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           12s                  node-controller  Node pause-813136 event: Registered Node pause-813136 in Controller
	
	
	==> dmesg <==
	[Dec19 03:42] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000049] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Dec19 03:43] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.161158] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.083944] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.111911] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.150433] kauditd_printk_skb: 171 callbacks suppressed
	[  +6.003343] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.041261] kauditd_printk_skb: 219 callbacks suppressed
	[Dec19 03:44] kauditd_printk_skb: 38 callbacks suppressed
	[  +4.115386] kauditd_printk_skb: 319 callbacks suppressed
	[  +4.385570] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.151668] kauditd_printk_skb: 29 callbacks suppressed
	[Dec19 03:45] kauditd_printk_skb: 79 callbacks suppressed
	
	
	==> etcd [49975ac9ed6143031b48b7ae6cc7d360646e2658e828592bb24fcd876c6d6e64] <==
	{"level":"warn","ts":"2025-12-19T03:45:00.286045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:45:00.316051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:45:00.343111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:45:00.359102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:45:00.369638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:45:00.374932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:45:00.387532Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:45:00.396016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:45:00.408272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:45:00.420385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:45:00.432529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:45:00.443363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:45:00.461934Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:45:00.472934Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:45:00.484777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:45:00.508187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:45:00.515442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:45:00.527061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:45:00.541084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:45:00.557190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:45:00.560055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:45:00.582203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:45:00.595959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:45:00.608337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:45:00.712282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35260","server-name":"","error":"EOF"}
	
	
	==> etcd [5ad4d5a9860531cb61c9a0e3b9d0bf2ff3b7f81b1ef30c9aa66047be00028f5e] <==
	{"level":"warn","ts":"2025-12-19T03:44:29.925347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:44:29.935914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:44:29.953722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:44:29.967399Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:44:29.981806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:44:30.004875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:44:30.050493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55496","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-19T03:44:38.677439Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-19T03:44:38.677650Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-813136","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.162:2380"],"advertise-client-urls":["https://192.168.50.162:2379"]}
	{"level":"error","ts":"2025-12-19T03:44:38.677877Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-19T03:44:45.683100Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-19T03:44:45.688106Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-19T03:44:45.688778Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"25a84ac227828bb5","current-leader-member-id":"25a84ac227828bb5"}
	{"level":"info","ts":"2025-12-19T03:44:45.689383Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-19T03:44:45.689456Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-19T03:44:45.689546Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-19T03:44:45.690167Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-19T03:44:45.690189Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-19T03:44:45.689621Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.162:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-19T03:44:45.690207Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.162:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-19T03:44:45.690215Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.162:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-19T03:44:45.695766Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.50.162:2380"}
	{"level":"error","ts":"2025-12-19T03:44:45.696070Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.162:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-19T03:44:45.696180Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.50.162:2380"}
	{"level":"info","ts":"2025-12-19T03:44:45.696314Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-813136","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.162:2380"],"advertise-client-urls":["https://192.168.50.162:2379"]}
	
	
	==> kernel <==
	 03:45:17 up 2 min,  0 users,  load average: 1.21, 0.45, 0.16
	Linux pause-813136 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Dec 17 12:49:57 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [49abc639ac06f44884351929f22ff79292ad918b0c774486fdc7c323c07395f3] <==
	I1219 03:45:01.482570       1 autoregister_controller.go:144] Starting autoregister controller
	I1219 03:45:01.482576       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1219 03:45:01.537166       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1219 03:45:01.541240       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1219 03:45:01.541274       1 policy_source.go:240] refreshing policies
	I1219 03:45:01.556420       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1219 03:45:01.562313       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1219 03:45:01.563088       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1219 03:45:01.566656       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1219 03:45:01.566684       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1219 03:45:01.571485       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1219 03:45:01.572807       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1219 03:45:01.574096       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1219 03:45:01.575371       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1219 03:45:01.606825       1 cache.go:39] Caches are synced for autoregister controller
	I1219 03:45:01.615417       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1219 03:45:01.904840       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1219 03:45:02.372748       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1219 03:45:03.118602       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1219 03:45:03.164044       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1219 03:45:03.201355       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1219 03:45:03.209514       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1219 03:45:04.893200       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1219 03:45:05.191557       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1219 03:45:05.289195       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [4fe030c193f1e8b9d4b9f05840eff806d4133ae4e14349cdc0bf44b641318844] <==
	W1219 03:44:54.779235       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1219 03:44:54.801641       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1219 03:44:54.922430       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1219 03:44:54.989437       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1219 03:44:55.003054       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1219 03:44:55.026311       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1219 03:44:55.064913       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1219 03:44:55.069818       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1219 03:44:55.115320       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1219 03:44:55.139217       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1219 03:44:55.154931       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1219 03:44:55.159505       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1219 03:44:55.174408       1 logging.go:55] [core] [Channel #21 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1219 03:44:55.286264       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1219 03:44:55.301910       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1219 03:44:55.329398       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1219 03:44:55.342197       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1219 03:44:55.535189       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1219 03:44:55.537636       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1219 03:44:55.606791       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1219 03:44:55.704776       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1219 03:44:55.721439       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1219 03:44:55.796928       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1219 03:44:55.873906       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1219 03:44:55.984064       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [0e3b73cba10f1de7e59c5357ad7e6a8855562c9e28088a1f26770e5c2a5c6b10] <==
	I1219 03:45:04.886091       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1219 03:45:04.886316       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1219 03:45:04.885362       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1219 03:45:04.887179       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1219 03:45:04.885370       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1219 03:45:04.889210       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1219 03:45:04.890288       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1219 03:45:04.892189       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1219 03:45:04.893316       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1219 03:45:04.894219       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1219 03:45:04.894391       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1219 03:45:04.895984       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1219 03:45:04.896596       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1219 03:45:04.898497       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1219 03:45:04.900375       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1219 03:45:04.901041       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1219 03:45:04.903689       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1219 03:45:04.906957       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1219 03:45:04.920187       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1219 03:45:04.922553       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1219 03:45:04.926836       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1219 03:45:04.931099       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1219 03:45:04.935843       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1219 03:45:04.935878       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1219 03:45:04.938720       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [18cd24c1b1176f2a47b1a9b256f8e63a5e52048661701cfcd864c2f888cd8444] <==
	I1219 03:44:34.057709       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1219 03:44:34.057950       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1219 03:44:34.058477       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1219 03:44:34.062918       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1219 03:44:34.067823       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1219 03:44:34.069640       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1219 03:44:34.071900       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1219 03:44:34.074146       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1219 03:44:34.080367       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1219 03:44:34.080392       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1219 03:44:34.080398       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1219 03:44:34.080831       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1219 03:44:34.085967       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1219 03:44:34.088168       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1219 03:44:34.090412       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1219 03:44:34.105903       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1219 03:44:34.105915       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1219 03:44:34.106054       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1219 03:44:34.106092       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1219 03:44:34.106356       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1219 03:44:34.106644       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1219 03:44:34.106742       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-813136"
	I1219 03:44:34.106858       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1219 03:44:34.107608       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1219 03:44:34.109159       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	
	
	==> kube-proxy [561fa3e63a07ee523e11e2e89b5973a0193cc158c8eeb0229d903ba3518d5308] <==
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1219 03:44:50.338989       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1219 03:44:50.339018       1 server_linux.go:132] "Using iptables Proxier"
	I1219 03:44:50.347258       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 03:44:50.347500       1 server.go:527] "Version info" version="v1.34.3"
	I1219 03:44:50.347524       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:44:50.351382       1 config.go:200] "Starting service config controller"
	I1219 03:44:50.351505       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 03:44:50.351587       1 config.go:106] "Starting endpoint slice config controller"
	I1219 03:44:50.351593       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 03:44:50.351615       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 03:44:50.351618       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 03:44:50.353664       1 config.go:309] "Starting node config controller"
	I1219 03:44:50.353702       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 03:44:50.353778       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 03:44:50.451646       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1219 03:44:50.451666       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1219 03:44:50.451648       1 shared_informer.go:356] "Caches are synced" controller="service config"
	E1219 03:44:56.106366       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": unexpected EOF"
	E1219 03:45:01.493200       1 reflector.go:205] "Failed to watch" err="servicecidrs.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"servicecidrs\" in API group \"networking.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1219 03:45:01.493532       1 reflector.go:205] "Failed to watch" err="nodes \"pause-813136\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1219 03:45:01.493826       1 reflector.go:205] "Failed to watch" err="services is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1219 03:45:01.494252       1 reflector.go:205] "Failed to watch" err="endpointslices.discovery.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"endpointslices\" in API group \"discovery.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	
	
	==> kube-proxy [962f0a8a67335a88760ed2a1a0c705e78a09cf72bc575a1e326bbade3377b38f] <==
	I1219 03:44:29.295625       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1219 03:44:30.798186       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1219 03:44:30.798225       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.162"]
	E1219 03:44:30.798316       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 03:44:30.901604       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1219 03:44:30.901682       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1219 03:44:30.901704       1 server_linux.go:132] "Using iptables Proxier"
	I1219 03:44:30.912886       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 03:44:30.913234       1 server.go:527] "Version info" version="v1.34.3"
	I1219 03:44:30.913330       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:44:30.914925       1 config.go:200] "Starting service config controller"
	I1219 03:44:30.914965       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 03:44:30.915049       1 config.go:106] "Starting endpoint slice config controller"
	I1219 03:44:30.915068       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 03:44:30.915187       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 03:44:30.915210       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 03:44:30.916928       1 config.go:309] "Starting node config controller"
	I1219 03:44:30.916952       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 03:44:30.916963       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 03:44:31.015898       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1219 03:44:31.015904       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 03:44:31.015935       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [9f475fc955ac4750ebbe4b3f8c05722a7b60f12916f2a2f3ee30fab6baf28977] <==
	I1219 03:45:00.244986       1 serving.go:386] Generated self-signed cert in-memory
	W1219 03:45:01.480849       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1219 03:45:01.480885       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1219 03:45:01.480896       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1219 03:45:01.480902       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1219 03:45:01.536566       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1219 03:45:01.536713       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:45:01.539636       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:45:01.539684       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:45:01.541100       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1219 03:45:01.541349       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1219 03:45:01.640444       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [9ff9551c32b87b4e8e57071827c7b999a4d9dccf0cb90592e1b30972b339fc43] <==
	I1219 03:44:28.648758       1 serving.go:386] Generated self-signed cert in-memory
	W1219 03:44:30.655063       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1219 03:44:30.655102       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1219 03:44:30.655155       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1219 03:44:30.655165       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1219 03:44:30.750564       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1219 03:44:30.750798       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:44:30.755830       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:44:30.755868       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:44:30.756072       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1219 03:44:30.756212       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1219 03:44:30.856314       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:44:45.770353       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1219 03:44:45.770484       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1219 03:44:45.770590       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1219 03:44:45.770793       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:44:45.771002       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1219 03:44:45.771206       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 19 03:44:59 pause-813136 kubelet[4144]: E1219 03:44:59.981170    4144 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-813136\" not found" node="pause-813136"
	Dec 19 03:45:00 pause-813136 kubelet[4144]: E1219 03:45:00.981537    4144 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-813136\" not found" node="pause-813136"
	Dec 19 03:45:00 pause-813136 kubelet[4144]: E1219 03:45:00.982061    4144 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-813136\" not found" node="pause-813136"
	Dec 19 03:45:00 pause-813136 kubelet[4144]: E1219 03:45:00.982370    4144 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-813136\" not found" node="pause-813136"
	Dec 19 03:45:00 pause-813136 kubelet[4144]: E1219 03:45:00.983596    4144 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-813136\" not found" node="pause-813136"
	Dec 19 03:45:01 pause-813136 kubelet[4144]: I1219 03:45:01.523248    4144 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-813136"
	Dec 19 03:45:01 pause-813136 kubelet[4144]: I1219 03:45:01.607703    4144 kubelet_node_status.go:124] "Node was previously registered" node="pause-813136"
	Dec 19 03:45:01 pause-813136 kubelet[4144]: I1219 03:45:01.607816    4144 kubelet_node_status.go:78] "Successfully registered node" node="pause-813136"
	Dec 19 03:45:01 pause-813136 kubelet[4144]: I1219 03:45:01.607873    4144 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 19 03:45:01 pause-813136 kubelet[4144]: I1219 03:45:01.609854    4144 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 19 03:45:01 pause-813136 kubelet[4144]: E1219 03:45:01.646672    4144 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-pause-813136\" already exists" pod="kube-system/etcd-pause-813136"
	Dec 19 03:45:01 pause-813136 kubelet[4144]: I1219 03:45:01.646714    4144 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-813136"
	Dec 19 03:45:01 pause-813136 kubelet[4144]: E1219 03:45:01.658764    4144 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-813136\" already exists" pod="kube-system/kube-apiserver-pause-813136"
	Dec 19 03:45:01 pause-813136 kubelet[4144]: I1219 03:45:01.658804    4144 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-813136"
	Dec 19 03:45:01 pause-813136 kubelet[4144]: E1219 03:45:01.671188    4144 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-813136\" already exists" pod="kube-system/kube-controller-manager-pause-813136"
	Dec 19 03:45:01 pause-813136 kubelet[4144]: I1219 03:45:01.671216    4144 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-813136"
	Dec 19 03:45:01 pause-813136 kubelet[4144]: E1219 03:45:01.682707    4144 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-813136\" already exists" pod="kube-system/kube-scheduler-pause-813136"
	Dec 19 03:45:01 pause-813136 kubelet[4144]: I1219 03:45:01.794162    4144 apiserver.go:52] "Watching apiserver"
	Dec 19 03:45:01 pause-813136 kubelet[4144]: I1219 03:45:01.825696    4144 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 19 03:45:01 pause-813136 kubelet[4144]: I1219 03:45:01.901580    4144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/12a1b329-3734-4dfb-be1c-6f3ba324c031-xtables-lock\") pod \"kube-proxy-kxqfm\" (UID: \"12a1b329-3734-4dfb-be1c-6f3ba324c031\") " pod="kube-system/kube-proxy-kxqfm"
	Dec 19 03:45:01 pause-813136 kubelet[4144]: I1219 03:45:01.901705    4144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/12a1b329-3734-4dfb-be1c-6f3ba324c031-lib-modules\") pod \"kube-proxy-kxqfm\" (UID: \"12a1b329-3734-4dfb-be1c-6f3ba324c031\") " pod="kube-system/kube-proxy-kxqfm"
	Dec 19 03:45:01 pause-813136 kubelet[4144]: I1219 03:45:01.984606    4144 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-813136"
	Dec 19 03:45:01 pause-813136 kubelet[4144]: E1219 03:45:01.995560    4144 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-813136\" already exists" pod="kube-system/kube-apiserver-pause-813136"
	Dec 19 03:45:07 pause-813136 kubelet[4144]: E1219 03:45:07.941198    4144 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766115907940870517  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:128011}  inodes_used:{value:57}}"
	Dec 19 03:45:07 pause-813136 kubelet[4144]: E1219 03:45:07.941235    4144 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766115907940870517  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:128011}  inodes_used:{value:57}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-813136 -n pause-813136
helpers_test.go:270: (dbg) Run:  kubectl --context pause-813136 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-813136 -n pause-813136
helpers_test.go:253: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-813136 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p pause-813136 logs -n 25: (1.087161195s)
helpers_test.go:261: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                            ARGS                                                                             │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p running-upgrade-964792 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                      │ running-upgrade-964792    │ jenkins │ v1.37.0 │ 19 Dec 25 03:42 UTC │                     │
	│ delete  │ -p offline-crio-052125                                                                                                                                      │ offline-crio-052125       │ jenkins │ v1.37.0 │ 19 Dec 25 03:42 UTC │ 19 Dec 25 03:42 UTC │
	│ start   │ -p pause-813136 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio                                                     │ pause-813136              │ jenkins │ v1.37.0 │ 19 Dec 25 03:42 UTC │ 19 Dec 25 03:44 UTC │
	│ start   │ -p kubernetes-upgrade-061737 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio                                             │ kubernetes-upgrade-061737 │ jenkins │ v1.37.0 │ 19 Dec 25 03:42 UTC │                     │
	│ start   │ -p kubernetes-upgrade-061737 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                 │ kubernetes-upgrade-061737 │ jenkins │ v1.37.0 │ 19 Dec 25 03:42 UTC │ 19 Dec 25 03:43 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-291901 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker │ stopped-upgrade-291901    │ jenkins │ v1.37.0 │ 19 Dec 25 03:42 UTC │                     │
	│ delete  │ -p stopped-upgrade-291901                                                                                                                                   │ stopped-upgrade-291901    │ jenkins │ v1.37.0 │ 19 Dec 25 03:42 UTC │ 19 Dec 25 03:42 UTC │
	│ start   │ -p NoKubernetes-982841 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio                                                 │ NoKubernetes-982841       │ jenkins │ v1.37.0 │ 19 Dec 25 03:42 UTC │                     │
	│ start   │ -p NoKubernetes-982841 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                                         │ NoKubernetes-982841       │ jenkins │ v1.37.0 │ 19 Dec 25 03:42 UTC │ 19 Dec 25 03:43 UTC │
	│ delete  │ -p kubernetes-upgrade-061737                                                                                                                                │ kubernetes-upgrade-061737 │ jenkins │ v1.37.0 │ 19 Dec 25 03:43 UTC │ 19 Dec 25 03:43 UTC │
	│ start   │ -p force-systemd-flag-589340 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                   │ force-systemd-flag-589340 │ jenkins │ v1.37.0 │ 19 Dec 25 03:43 UTC │ 19 Dec 25 03:44 UTC │
	│ start   │ -p NoKubernetes-982841 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                         │ NoKubernetes-982841       │ jenkins │ v1.37.0 │ 19 Dec 25 03:43 UTC │ 19 Dec 25 03:43 UTC │
	│ delete  │ -p NoKubernetes-982841                                                                                                                                      │ NoKubernetes-982841       │ jenkins │ v1.37.0 │ 19 Dec 25 03:44 UTC │ 19 Dec 25 03:44 UTC │
	│ start   │ -p NoKubernetes-982841 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                         │ NoKubernetes-982841       │ jenkins │ v1.37.0 │ 19 Dec 25 03:44 UTC │ 19 Dec 25 03:44 UTC │
	│ start   │ -p pause-813136 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                              │ pause-813136              │ jenkins │ v1.37.0 │ 19 Dec 25 03:44 UTC │ 19 Dec 25 03:45 UTC │
	│ ssh     │ force-systemd-flag-589340 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                        │ force-systemd-flag-589340 │ jenkins │ v1.37.0 │ 19 Dec 25 03:44 UTC │ 19 Dec 25 03:44 UTC │
	│ delete  │ -p force-systemd-flag-589340                                                                                                                                │ force-systemd-flag-589340 │ jenkins │ v1.37.0 │ 19 Dec 25 03:44 UTC │ 19 Dec 25 03:44 UTC │
	│ start   │ -p guest-783207 --no-kubernetes --driver=kvm2  --container-runtime=crio                                                                                     │ guest-783207              │ jenkins │ v1.37.0 │ 19 Dec 25 03:44 UTC │ 19 Dec 25 03:44 UTC │
	│ ssh     │ -p NoKubernetes-982841 sudo systemctl is-active --quiet service kubelet                                                                                     │ NoKubernetes-982841       │ jenkins │ v1.37.0 │ 19 Dec 25 03:44 UTC │                     │
	│ stop    │ -p NoKubernetes-982841                                                                                                                                      │ NoKubernetes-982841       │ jenkins │ v1.37.0 │ 19 Dec 25 03:44 UTC │ 19 Dec 25 03:44 UTC │
	│ start   │ -p NoKubernetes-982841 --driver=kvm2  --container-runtime=crio                                                                                              │ NoKubernetes-982841       │ jenkins │ v1.37.0 │ 19 Dec 25 03:44 UTC │ 19 Dec 25 03:45 UTC │
	│ start   │ -p force-systemd-env-919893 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                                    │ force-systemd-env-919893  │ jenkins │ v1.37.0 │ 19 Dec 25 03:44 UTC │                     │
	│ ssh     │ -p NoKubernetes-982841 sudo systemctl is-active --quiet service kubelet                                                                                     │ NoKubernetes-982841       │ jenkins │ v1.37.0 │ 19 Dec 25 03:45 UTC │                     │
	│ delete  │ -p NoKubernetes-982841                                                                                                                                      │ NoKubernetes-982841       │ jenkins │ v1.37.0 │ 19 Dec 25 03:45 UTC │ 19 Dec 25 03:45 UTC │
	│ start   │ -p cert-expiration-387964 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio                                                        │ cert-expiration-387964    │ jenkins │ v1.37.0 │ 19 Dec 25 03:45 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 03:45:03
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 03:45:03.757173   45792 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:45:03.757467   45792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:45:03.757471   45792 out.go:374] Setting ErrFile to fd 2...
	I1219 03:45:03.757475   45792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:45:03.757797   45792 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5010/.minikube/bin
	I1219 03:45:03.758469   45792 out.go:368] Setting JSON to false
	I1219 03:45:03.759897   45792 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5248,"bootTime":1766110656,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 03:45:03.759960   45792 start.go:143] virtualization: kvm guest
	I1219 03:45:03.761956   45792 out.go:179] * [cert-expiration-387964] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 03:45:03.763197   45792 notify.go:221] Checking for updates...
	I1219 03:45:03.763204   45792 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 03:45:03.764319   45792 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 03:45:03.765555   45792 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-5010/kubeconfig
	I1219 03:45:03.766761   45792 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5010/.minikube
	I1219 03:45:03.767960   45792 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 03:45:03.769147   45792 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 03:45:03.771054   45792 config.go:182] Loaded profile config "force-systemd-env-919893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:45:03.771190   45792 config.go:182] Loaded profile config "guest-783207": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1219 03:45:03.771389   45792 config.go:182] Loaded profile config "pause-813136": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:45:03.771544   45792 config.go:182] Loaded profile config "running-upgrade-964792": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1219 03:45:03.771693   45792 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 03:45:03.808022   45792 out.go:179] * Using the kvm2 driver based on user configuration
	I1219 03:45:03.809068   45792 start.go:309] selected driver: kvm2
	I1219 03:45:03.809092   45792 start.go:928] validating driver "kvm2" against <nil>
	I1219 03:45:03.809105   45792 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 03:45:03.810190   45792 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1219 03:45:03.810502   45792 start_flags.go:975] Wait components to verify : map[apiserver:true system_pods:true]
	I1219 03:45:03.810526   45792 cni.go:84] Creating CNI manager for ""
	I1219 03:45:03.810637   45792 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1219 03:45:03.810643   45792 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1219 03:45:03.810731   45792 start.go:353] cluster config:
	{Name:cert-expiration-387964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:cert-expiration-387964 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:45:03.810858   45792 iso.go:125] acquiring lock: {Name:mk42290af04a74a4dddf27fa33aac85c9bdccfe0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:45:03.812994   45792 out.go:179] * Starting "cert-expiration-387964" primary control-plane node in "cert-expiration-387964" cluster
	I1219 03:45:01.821635   43491 api_server.go:253] Checking apiserver healthz at https://192.168.72.51:8443/healthz ...
	I1219 03:45:01.822355   43491 api_server.go:269] stopped: https://192.168.72.51:8443/healthz: Get "https://192.168.72.51:8443/healthz": dial tcp 192.168.72.51:8443: connect: connection refused
	I1219 03:45:01.822414   43491 cri.go:57] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1219 03:45:01.822481   43491 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1219 03:45:01.865544   43491 cri.go:92] found id: "585e3cce44d1126e40385c0604426be3495c3b57b7698e0363a5f1983d7e285b"
	I1219 03:45:01.865594   43491 cri.go:92] found id: ""
	I1219 03:45:01.865604   43491 logs.go:282] 1 containers: [585e3cce44d1126e40385c0604426be3495c3b57b7698e0363a5f1983d7e285b]
	I1219 03:45:01.865667   43491 ssh_runner.go:195] Run: which crictl
	I1219 03:45:01.870867   43491 cri.go:57] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1219 03:45:01.870933   43491 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1219 03:45:01.922887   43491 cri.go:92] found id: "53d800e1ee96bfba31e8f740b14b8da38d8dfab6afc39ed2f1d301b246a67fe7"
	I1219 03:45:01.922911   43491 cri.go:92] found id: ""
	I1219 03:45:01.922921   43491 logs.go:282] 1 containers: [53d800e1ee96bfba31e8f740b14b8da38d8dfab6afc39ed2f1d301b246a67fe7]
	I1219 03:45:01.922999   43491 ssh_runner.go:195] Run: which crictl
	I1219 03:45:01.928942   43491 cri.go:57] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1219 03:45:01.929032   43491 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1219 03:45:01.970638   43491 cri.go:92] found id: "b59158ce45f13fa82d44cfc35694a0a0243a43dd2b96405d95aa089a34e53d4f"
	I1219 03:45:01.970665   43491 cri.go:92] found id: ""
	I1219 03:45:01.970675   43491 logs.go:282] 1 containers: [b59158ce45f13fa82d44cfc35694a0a0243a43dd2b96405d95aa089a34e53d4f]
	I1219 03:45:01.970737   43491 ssh_runner.go:195] Run: which crictl
	I1219 03:45:01.975227   43491 cri.go:57] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1219 03:45:01.975311   43491 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1219 03:45:02.015880   43491 cri.go:92] found id: "5e17df7e58c54d28fb401e28a90bb4aa28fde085d64c01000b43817f26687c31"
	I1219 03:45:02.015899   43491 cri.go:92] found id: "26a5313abc2a12853d08d1fb5726fed4b6a0e0ce8cd2e3beb603e648d475d54f"
	I1219 03:45:02.015903   43491 cri.go:92] found id: ""
	I1219 03:45:02.015910   43491 logs.go:282] 2 containers: [5e17df7e58c54d28fb401e28a90bb4aa28fde085d64c01000b43817f26687c31 26a5313abc2a12853d08d1fb5726fed4b6a0e0ce8cd2e3beb603e648d475d54f]
	I1219 03:45:02.015956   43491 ssh_runner.go:195] Run: which crictl
	I1219 03:45:02.020337   43491 ssh_runner.go:195] Run: which crictl
	I1219 03:45:02.025869   43491 cri.go:57] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1219 03:45:02.025934   43491 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1219 03:45:02.075479   43491 cri.go:92] found id: "0cb084af389a1bd0b5c3b71a9e44e94bba7195b499727a892a147b6da31605ed"
	I1219 03:45:02.075498   43491 cri.go:92] found id: ""
	I1219 03:45:02.075506   43491 logs.go:282] 1 containers: [0cb084af389a1bd0b5c3b71a9e44e94bba7195b499727a892a147b6da31605ed]
	I1219 03:45:02.075559   43491 ssh_runner.go:195] Run: which crictl
	I1219 03:45:02.081619   43491 cri.go:57] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1219 03:45:02.081683   43491 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1219 03:45:02.126610   43491 cri.go:92] found id: "ab75f962d8500daa81a9b0d01aef4ec2f6d74130562f0baec94aa1e5a804d4cf"
	I1219 03:45:02.126632   43491 cri.go:92] found id: "fb035234b551cfda9c8d7e616d0ee814d600644b66619823bf6461aace6531a8"
	I1219 03:45:02.126638   43491 cri.go:92] found id: ""
	I1219 03:45:02.126648   43491 logs.go:282] 2 containers: [ab75f962d8500daa81a9b0d01aef4ec2f6d74130562f0baec94aa1e5a804d4cf fb035234b551cfda9c8d7e616d0ee814d600644b66619823bf6461aace6531a8]
	I1219 03:45:02.126706   43491 ssh_runner.go:195] Run: which crictl
	I1219 03:45:02.130907   43491 ssh_runner.go:195] Run: which crictl
	I1219 03:45:02.134777   43491 cri.go:57] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1219 03:45:02.134843   43491 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1219 03:45:02.173806   43491 cri.go:92] found id: ""
	I1219 03:45:02.173833   43491 logs.go:282] 0 containers: []
	W1219 03:45:02.173842   43491 logs.go:284] No container was found matching "kindnet"
	I1219 03:45:02.173851   43491 cri.go:57] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1219 03:45:02.173900   43491 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1219 03:45:02.216085   43491 cri.go:92] found id: "7356291d08314af75a3b5d53d5f33844edd5a37751b81e5a248e5ffbf789859b"
	I1219 03:45:02.216115   43491 cri.go:92] found id: ""
	I1219 03:45:02.216124   43491 logs.go:282] 1 containers: [7356291d08314af75a3b5d53d5f33844edd5a37751b81e5a248e5ffbf789859b]
	I1219 03:45:02.216193   43491 ssh_runner.go:195] Run: which crictl
	I1219 03:45:02.221816   43491 logs.go:123] Gathering logs for kubelet ...
	I1219 03:45:02.221852   43491 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1219 03:45:02.262368   43491 logs.go:138] Found kubelet problem: Dec 19 03:42:55 running-upgrade-964792 kubelet[1231]: E1219 03:42:55.688316    1231 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:running-upgrade-964792\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'running-upgrade-964792' and this object" logger="UnhandledError"
	W1219 03:45:02.262761   43491 logs.go:138] Found kubelet problem: Dec 19 03:42:55 running-upgrade-964792 kubelet[1231]: I1219 03:42:55.688356    1231 status_manager.go:890] "Failed to get status for pod" podUID="d9cef4eb74460bab8c6dbc95b8aae891" pod="kube-system/kube-controller-manager-running-upgrade-964792" err="pods \"kube-controller-manager-running-upgrade-964792\" is forbidden: User \"system:node:running-upgrade-964792\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'running-upgrade-964792' and this object"
	W1219 03:45:02.263063   43491 logs.go:138] Found kubelet problem: Dec 19 03:42:55 running-upgrade-964792 kubelet[1231]: E1219 03:42:55.689571    1231 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:running-upgrade-964792\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'running-upgrade-964792' and this object" logger="UnhandledError"
	W1219 03:45:02.263385   43491 logs.go:138] Found kubelet problem: Dec 19 03:42:55 running-upgrade-964792 kubelet[1231]: I1219 03:42:55.708997    1231 status_manager.go:890] "Failed to get status for pod" podUID="fbdafa3f-5bac-464e-842f-c9e50c0a0447" pod="kube-system/storage-provisioner" err="pods \"storage-provisioner\" is forbidden: User \"system:node:running-upgrade-964792\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'running-upgrade-964792' and this object"
	I1219 03:45:02.338204   43491 logs.go:123] Gathering logs for dmesg ...
	I1219 03:45:02.338242   43491 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1219 03:45:02.360027   43491 logs.go:123] Gathering logs for kube-apiserver [585e3cce44d1126e40385c0604426be3495c3b57b7698e0363a5f1983d7e285b] ...
	I1219 03:45:02.360058   43491 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 585e3cce44d1126e40385c0604426be3495c3b57b7698e0363a5f1983d7e285b"
	I1219 03:45:02.407438   43491 logs.go:123] Gathering logs for etcd [53d800e1ee96bfba31e8f740b14b8da38d8dfab6afc39ed2f1d301b246a67fe7] ...
	I1219 03:45:02.407472   43491 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 53d800e1ee96bfba31e8f740b14b8da38d8dfab6afc39ed2f1d301b246a67fe7"
	I1219 03:45:02.473724   43491 logs.go:123] Gathering logs for kube-scheduler [5e17df7e58c54d28fb401e28a90bb4aa28fde085d64c01000b43817f26687c31] ...
	I1219 03:45:02.473768   43491 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e17df7e58c54d28fb401e28a90bb4aa28fde085d64c01000b43817f26687c31"
	I1219 03:45:02.555160   43491 logs.go:123] Gathering logs for kube-controller-manager [ab75f962d8500daa81a9b0d01aef4ec2f6d74130562f0baec94aa1e5a804d4cf] ...
	I1219 03:45:02.555200   43491 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab75f962d8500daa81a9b0d01aef4ec2f6d74130562f0baec94aa1e5a804d4cf"
	I1219 03:45:02.602502   43491 logs.go:123] Gathering logs for CRI-O ...
	I1219 03:45:02.602544   43491 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1219 03:45:02.992489   43491 logs.go:123] Gathering logs for describe nodes ...
	I1219 03:45:02.992515   43491 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1219 03:45:03.085268   43491 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1219 03:45:03.085294   43491 logs.go:123] Gathering logs for coredns [b59158ce45f13fa82d44cfc35694a0a0243a43dd2b96405d95aa089a34e53d4f] ...
	I1219 03:45:03.085309   43491 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b59158ce45f13fa82d44cfc35694a0a0243a43dd2b96405d95aa089a34e53d4f"
	I1219 03:45:03.131371   43491 logs.go:123] Gathering logs for kube-scheduler [26a5313abc2a12853d08d1fb5726fed4b6a0e0ce8cd2e3beb603e648d475d54f] ...
	I1219 03:45:03.131408   43491 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26a5313abc2a12853d08d1fb5726fed4b6a0e0ce8cd2e3beb603e648d475d54f"
	I1219 03:45:03.180266   43491 logs.go:123] Gathering logs for kube-proxy [0cb084af389a1bd0b5c3b71a9e44e94bba7195b499727a892a147b6da31605ed] ...
	I1219 03:45:03.180314   43491 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0cb084af389a1bd0b5c3b71a9e44e94bba7195b499727a892a147b6da31605ed"
	I1219 03:45:03.228279   43491 logs.go:123] Gathering logs for kube-controller-manager [fb035234b551cfda9c8d7e616d0ee814d600644b66619823bf6461aace6531a8] ...
	I1219 03:45:03.228313   43491 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb035234b551cfda9c8d7e616d0ee814d600644b66619823bf6461aace6531a8"
	W1219 03:45:03.275050   43491 logs.go:130] failed kube-controller-manager [fb035234b551cfda9c8d7e616d0ee814d600644b66619823bf6461aace6531a8]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb035234b551cfda9c8d7e616d0ee814d600644b66619823bf6461aace6531a8" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb035234b551cfda9c8d7e616d0ee814d600644b66619823bf6461aace6531a8": Process exited with status 1
	stdout:
	
	stderr:
	E1219 03:45:03.256951    4284 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb035234b551cfda9c8d7e616d0ee814d600644b66619823bf6461aace6531a8\": container with ID starting with fb035234b551cfda9c8d7e616d0ee814d600644b66619823bf6461aace6531a8 not found: ID does not exist" containerID="fb035234b551cfda9c8d7e616d0ee814d600644b66619823bf6461aace6531a8"
	time="2025-12-19T03:45:03Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"fb035234b551cfda9c8d7e616d0ee814d600644b66619823bf6461aace6531a8\": container with ID starting with fb035234b551cfda9c8d7e616d0ee814d600644b66619823bf6461aace6531a8 not found: ID does not exist"
	 output: 
	** stderr ** 
	E1219 03:45:03.256951    4284 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb035234b551cfda9c8d7e616d0ee814d600644b66619823bf6461aace6531a8\": container with ID starting with fb035234b551cfda9c8d7e616d0ee814d600644b66619823bf6461aace6531a8 not found: ID does not exist" containerID="fb035234b551cfda9c8d7e616d0ee814d600644b66619823bf6461aace6531a8"
	time="2025-12-19T03:45:03Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"fb035234b551cfda9c8d7e616d0ee814d600644b66619823bf6461aace6531a8\": container with ID starting with fb035234b551cfda9c8d7e616d0ee814d600644b66619823bf6461aace6531a8 not found: ID does not exist"
	
	** /stderr **
	I1219 03:45:03.275078   43491 logs.go:123] Gathering logs for storage-provisioner [7356291d08314af75a3b5d53d5f33844edd5a37751b81e5a248e5ffbf789859b] ...
	I1219 03:45:03.275095   43491 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7356291d08314af75a3b5d53d5f33844edd5a37751b81e5a248e5ffbf789859b"
	I1219 03:45:03.325803   43491 logs.go:123] Gathering logs for container status ...
	I1219 03:45:03.325836   43491 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1219 03:45:03.384850   43491 out.go:374] Setting ErrFile to fd 2...
	I1219 03:45:03.384886   43491 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W1219 03:45:03.384959   43491 out.go:285] X Problems detected in kubelet:
	W1219 03:45:03.384980   43491 out.go:285]   Dec 19 03:42:55 running-upgrade-964792 kubelet[1231]: E1219 03:42:55.688316    1231 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:running-upgrade-964792\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'running-upgrade-964792' and this object" logger="UnhandledError"
	W1219 03:45:03.384993   43491 out.go:285]   Dec 19 03:42:55 running-upgrade-964792 kubelet[1231]: I1219 03:42:55.688356    1231 status_manager.go:890] "Failed to get status for pod" podUID="d9cef4eb74460bab8c6dbc95b8aae891" pod="kube-system/kube-controller-manager-running-upgrade-964792" err="pods \"kube-controller-manager-running-upgrade-964792\" is forbidden: User \"system:node:running-upgrade-964792\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'running-upgrade-964792' and this object"
	W1219 03:45:03.385002   43491 out.go:285]   Dec 19 03:42:55 running-upgrade-964792 kubelet[1231]: E1219 03:42:55.689571    1231 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:running-upgrade-964792\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'running-upgrade-964792' and this object" logger="UnhandledError"
	W1219 03:45:03.385012   43491 out.go:285]   Dec 19 03:42:55 running-upgrade-964792 kubelet[1231]: I1219 03:42:55.708997    1231 status_manager.go:890] "Failed to get status for pod" podUID="fbdafa3f-5bac-464e-842f-c9e50c0a0447" pod="kube-system/storage-provisioner" err="pods \"storage-provisioner\" is forbidden: User \"system:node:running-upgrade-964792\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'running-upgrade-964792' and this object"
	I1219 03:45:03.385021   43491 out.go:374] Setting ErrFile to fd 2...
	I1219 03:45:03.385035   43491 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:45:02.116668   45567 main.go:144] libmachine: waiting for domain to start...
	I1219 03:45:02.118226   45567 main.go:144] libmachine: domain is now running
	I1219 03:45:02.118248   45567 main.go:144] libmachine: waiting for IP...
	I1219 03:45:02.119137   45567 main.go:144] libmachine: domain force-systemd-env-919893 has defined MAC address 52:54:00:4b:b6:04 in network mk-force-systemd-env-919893
	I1219 03:45:02.119760   45567 main.go:144] libmachine: no network interface addresses found for domain force-systemd-env-919893 (source=lease)
	I1219 03:45:02.119776   45567 main.go:144] libmachine: trying to list again with source=arp
	I1219 03:45:02.120140   45567 main.go:144] libmachine: unable to find current IP address of domain force-systemd-env-919893 in network mk-force-systemd-env-919893 (interfaces detected: [])
	I1219 03:45:02.120178   45567 retry.go:31] will retry after 220.853068ms: waiting for domain to come up
	I1219 03:45:02.342791   45567 main.go:144] libmachine: domain force-systemd-env-919893 has defined MAC address 52:54:00:4b:b6:04 in network mk-force-systemd-env-919893
	I1219 03:45:02.343664   45567 main.go:144] libmachine: no network interface addresses found for domain force-systemd-env-919893 (source=lease)
	I1219 03:45:02.343687   45567 main.go:144] libmachine: trying to list again with source=arp
	I1219 03:45:02.344173   45567 main.go:144] libmachine: unable to find current IP address of domain force-systemd-env-919893 in network mk-force-systemd-env-919893 (interfaces detected: [])
	I1219 03:45:02.344213   45567 retry.go:31] will retry after 338.10356ms: waiting for domain to come up
	I1219 03:45:02.683968   45567 main.go:144] libmachine: domain force-systemd-env-919893 has defined MAC address 52:54:00:4b:b6:04 in network mk-force-systemd-env-919893
	I1219 03:45:02.684758   45567 main.go:144] libmachine: no network interface addresses found for domain force-systemd-env-919893 (source=lease)
	I1219 03:45:02.684778   45567 main.go:144] libmachine: trying to list again with source=arp
	I1219 03:45:02.685236   45567 main.go:144] libmachine: unable to find current IP address of domain force-systemd-env-919893 in network mk-force-systemd-env-919893 (interfaces detected: [])
	I1219 03:45:02.685284   45567 retry.go:31] will retry after 380.595104ms: waiting for domain to come up
	I1219 03:45:03.069585   45567 main.go:144] libmachine: domain force-systemd-env-919893 has defined MAC address 52:54:00:4b:b6:04 in network mk-force-systemd-env-919893
	I1219 03:45:03.263637   45567 main.go:144] libmachine: no network interface addresses found for domain force-systemd-env-919893 (source=lease)
	I1219 03:45:03.263656   45567 main.go:144] libmachine: trying to list again with source=arp
	I1219 03:45:03.264437   45567 main.go:144] libmachine: unable to find current IP address of domain force-systemd-env-919893 in network mk-force-systemd-env-919893 (interfaces detected: [])
	I1219 03:45:03.264484   45567 retry.go:31] will retry after 392.428668ms: waiting for domain to come up
	I1219 03:45:03.658477   45567 main.go:144] libmachine: domain force-systemd-env-919893 has defined MAC address 52:54:00:4b:b6:04 in network mk-force-systemd-env-919893
	I1219 03:45:03.659398   45567 main.go:144] libmachine: no network interface addresses found for domain force-systemd-env-919893 (source=lease)
	I1219 03:45:03.659421   45567 main.go:144] libmachine: trying to list again with source=arp
	I1219 03:45:03.660084   45567 main.go:144] libmachine: unable to find current IP address of domain force-systemd-env-919893 in network mk-force-systemd-env-919893 (interfaces detected: [])
	I1219 03:45:03.660140   45567 retry.go:31] will retry after 528.805797ms: waiting for domain to come up
	I1219 03:45:04.190923   45567 main.go:144] libmachine: domain force-systemd-env-919893 has defined MAC address 52:54:00:4b:b6:04 in network mk-force-systemd-env-919893
	I1219 03:45:04.191702   45567 main.go:144] libmachine: no network interface addresses found for domain force-systemd-env-919893 (source=lease)
	I1219 03:45:04.191719   45567 main.go:144] libmachine: trying to list again with source=arp
	I1219 03:45:04.192106   45567 main.go:144] libmachine: unable to find current IP address of domain force-systemd-env-919893 in network mk-force-systemd-env-919893 (interfaces detected: [])
	I1219 03:45:04.192143   45567 retry.go:31] will retry after 874.305615ms: waiting for domain to come up
	I1219 03:45:05.068138   45567 main.go:144] libmachine: domain force-systemd-env-919893 has defined MAC address 52:54:00:4b:b6:04 in network mk-force-systemd-env-919893
	I1219 03:45:05.068894   45567 main.go:144] libmachine: no network interface addresses found for domain force-systemd-env-919893 (source=lease)
	I1219 03:45:05.068917   45567 main.go:144] libmachine: trying to list again with source=arp
	I1219 03:45:05.069373   45567 main.go:144] libmachine: unable to find current IP address of domain force-systemd-env-919893 in network mk-force-systemd-env-919893 (interfaces detected: [])
	I1219 03:45:05.069408   45567 retry.go:31] will retry after 1.039844515s: waiting for domain to come up
	I1219 03:45:06.110557   45567 main.go:144] libmachine: domain force-systemd-env-919893 has defined MAC address 52:54:00:4b:b6:04 in network mk-force-systemd-env-919893
	I1219 03:45:06.111240   45567 main.go:144] libmachine: no network interface addresses found for domain force-systemd-env-919893 (source=lease)
	I1219 03:45:06.111258   45567 main.go:144] libmachine: trying to list again with source=arp
	I1219 03:45:06.111645   45567 main.go:144] libmachine: unable to find current IP address of domain force-systemd-env-919893 in network mk-force-systemd-env-919893 (interfaces detected: [])
	I1219 03:45:06.111677   45567 retry.go:31] will retry after 900.852655ms: waiting for domain to come up
	I1219 03:45:07.014605   45567 main.go:144] libmachine: domain force-systemd-env-919893 has defined MAC address 52:54:00:4b:b6:04 in network mk-force-systemd-env-919893
	I1219 03:45:07.015232   45567 main.go:144] libmachine: no network interface addresses found for domain force-systemd-env-919893 (source=lease)
	I1219 03:45:07.015248   45567 main.go:144] libmachine: trying to list again with source=arp
	I1219 03:45:07.015634   45567 main.go:144] libmachine: unable to find current IP address of domain force-systemd-env-919893 in network mk-force-systemd-env-919893 (interfaces detected: [])
	I1219 03:45:07.015667   45567 retry.go:31] will retry after 1.625422219s: waiting for domain to come up
	I1219 03:45:03.292952   44926 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:45:03.293765   44926 addons.go:546] duration metric: took 4.54868ms for enable addons: enabled=[]
	I1219 03:45:03.529635   44926 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:45:03.558489   44926 node_ready.go:35] waiting up to 6m0s for node "pause-813136" to be "Ready" ...
	I1219 03:45:03.562709   44926 node_ready.go:49] node "pause-813136" is "Ready"
	I1219 03:45:03.562738   44926 node_ready.go:38] duration metric: took 4.209733ms for node "pause-813136" to be "Ready" ...
	I1219 03:45:03.562752   44926 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:45:03.562805   44926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:45:03.599465   44926 api_server.go:72] duration metric: took 310.673933ms to wait for apiserver process to appear ...
	I1219 03:45:03.599498   44926 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:45:03.599520   44926 api_server.go:253] Checking apiserver healthz at https://192.168.50.162:8443/healthz ...
	I1219 03:45:03.607925   44926 api_server.go:279] https://192.168.50.162:8443/healthz returned 200:
	ok
	I1219 03:45:03.609182   44926 api_server.go:141] control plane version: v1.34.3
	I1219 03:45:03.609204   44926 api_server.go:131] duration metric: took 9.699245ms to wait for apiserver health ...
	I1219 03:45:03.609214   44926 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:45:03.612197   44926 system_pods.go:59] 6 kube-system pods found
	I1219 03:45:03.612227   44926 system_pods.go:61] "coredns-66bc5c9577-7b6qk" [99badcb6-8825-498e-a57c-e34b1ae19d49] Running
	I1219 03:45:03.612240   44926 system_pods.go:61] "etcd-pause-813136" [e3038183-c5cc-4bdb-82c9-9f74195972cd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:45:03.612254   44926 system_pods.go:61] "kube-apiserver-pause-813136" [223ca060-74b0-44c7-b091-b40cad860b31] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:45:03.612282   44926 system_pods.go:61] "kube-controller-manager-pause-813136" [892f5cf0-f3c5-426d-af78-0fbb445e241b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:45:03.612291   44926 system_pods.go:61] "kube-proxy-kxqfm" [12a1b329-3734-4dfb-be1c-6f3ba324c031] Running
	I1219 03:45:03.612301   44926 system_pods.go:61] "kube-scheduler-pause-813136" [85497637-0b2a-468e-b3ce-0c7810cda572] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:45:03.612311   44926 system_pods.go:74] duration metric: took 3.08879ms to wait for pod list to return data ...
	I1219 03:45:03.612324   44926 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:45:03.614937   44926 default_sa.go:45] found service account: "default"
	I1219 03:45:03.614959   44926 default_sa.go:55] duration metric: took 2.627057ms for default service account to be created ...
	I1219 03:45:03.614970   44926 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:45:03.617864   44926 system_pods.go:86] 6 kube-system pods found
	I1219 03:45:03.617892   44926 system_pods.go:89] "coredns-66bc5c9577-7b6qk" [99badcb6-8825-498e-a57c-e34b1ae19d49] Running
	I1219 03:45:03.617910   44926 system_pods.go:89] "etcd-pause-813136" [e3038183-c5cc-4bdb-82c9-9f74195972cd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:45:03.617920   44926 system_pods.go:89] "kube-apiserver-pause-813136" [223ca060-74b0-44c7-b091-b40cad860b31] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:45:03.617936   44926 system_pods.go:89] "kube-controller-manager-pause-813136" [892f5cf0-f3c5-426d-af78-0fbb445e241b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:45:03.617942   44926 system_pods.go:89] "kube-proxy-kxqfm" [12a1b329-3734-4dfb-be1c-6f3ba324c031] Running
	I1219 03:45:03.617949   44926 system_pods.go:89] "kube-scheduler-pause-813136" [85497637-0b2a-468e-b3ce-0c7810cda572] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:45:03.617957   44926 system_pods.go:126] duration metric: took 2.981371ms to wait for k8s-apps to be running ...
	I1219 03:45:03.617966   44926 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:45:03.618009   44926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:45:03.646461   44926 system_svc.go:56] duration metric: took 28.486559ms WaitForService to wait for kubelet
	I1219 03:45:03.646490   44926 kubeadm.go:587] duration metric: took 357.703864ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:45:03.646509   44926 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:45:03.649192   44926 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1219 03:45:03.649212   44926 node_conditions.go:123] node cpu capacity is 2
	I1219 03:45:03.649225   44926 node_conditions.go:105] duration metric: took 2.710306ms to run NodePressure ...
	I1219 03:45:03.649245   44926 start.go:242] waiting for startup goroutines ...
	I1219 03:45:03.649255   44926 start.go:247] waiting for cluster config update ...
	I1219 03:45:03.649266   44926 start.go:256] writing updated cluster config ...
	I1219 03:45:03.649615   44926 ssh_runner.go:195] Run: rm -f paused
	I1219 03:45:03.659545   44926 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:45:03.660331   44926 kapi.go:59] client config for pause-813136: &rest.Config{Host:"https://192.168.50.162:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22230-5010/.minikube/profiles/pause-813136/client.crt", KeyFile:"/home/jenkins/minikube-integration/22230-5010/.minikube/profiles/pause-813136/client.key", CAFile:"/home/jenkins/minikube-integration/22230-5010/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2863880), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1219 03:45:03.665153   44926 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7b6qk" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:45:03.670719   44926 pod_ready.go:94] pod "coredns-66bc5c9577-7b6qk" is "Ready"
	I1219 03:45:03.670743   44926 pod_ready.go:86] duration metric: took 5.565934ms for pod "coredns-66bc5c9577-7b6qk" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:45:03.673083   44926 pod_ready.go:83] waiting for pod "etcd-pause-813136" in "kube-system" namespace to be "Ready" or be gone ...
	W1219 03:45:05.680393   44926 pod_ready.go:104] pod "etcd-pause-813136" is not "Ready", error: <nil>
	I1219 03:45:03.813998   45792 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1219 03:45:03.814018   45792 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-5010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1219 03:45:03.814023   45792 cache.go:65] Caching tarball of preloaded images
	I1219 03:45:03.814087   45792 preload.go:238] Found /home/jenkins/minikube-integration/22230-5010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1219 03:45:03.814093   45792 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1219 03:45:03.814164   45792 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/cert-expiration-387964/config.json ...
	I1219 03:45:03.814180   45792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/cert-expiration-387964/config.json: {Name:mk6a2e3f6adc3f0e8b157acf7d31946b1636bce1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:45:03.814340   45792 start.go:360] acquireMachinesLock for cert-expiration-387964: {Name:mk229398d29442d4d52885bbac963e5004bfbfba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1219 03:45:08.643455   45567 main.go:144] libmachine: domain force-systemd-env-919893 has defined MAC address 52:54:00:4b:b6:04 in network mk-force-systemd-env-919893
	I1219 03:45:08.644076   45567 main.go:144] libmachine: no network interface addresses found for domain force-systemd-env-919893 (source=lease)
	I1219 03:45:08.644096   45567 main.go:144] libmachine: trying to list again with source=arp
	I1219 03:45:08.644547   45567 main.go:144] libmachine: unable to find current IP address of domain force-systemd-env-919893 in network mk-force-systemd-env-919893 (interfaces detected: [])
	I1219 03:45:08.644603   45567 retry.go:31] will retry after 1.942522307s: waiting for domain to come up
	I1219 03:45:10.588702   45567 main.go:144] libmachine: domain force-systemd-env-919893 has defined MAC address 52:54:00:4b:b6:04 in network mk-force-systemd-env-919893
	I1219 03:45:10.589377   45567 main.go:144] libmachine: no network interface addresses found for domain force-systemd-env-919893 (source=lease)
	I1219 03:45:10.589393   45567 main.go:144] libmachine: trying to list again with source=arp
	I1219 03:45:10.589940   45567 main.go:144] libmachine: unable to find current IP address of domain force-systemd-env-919893 in network mk-force-systemd-env-919893 (interfaces detected: [])
	I1219 03:45:10.589980   45567 retry.go:31] will retry after 2.278647694s: waiting for domain to come up
	W1219 03:45:07.680746   44926 pod_ready.go:104] pod "etcd-pause-813136" is not "Ready", error: <nil>
	W1219 03:45:10.183213   44926 pod_ready.go:104] pod "etcd-pause-813136" is not "Ready", error: <nil>
	W1219 03:45:12.678726   44926 pod_ready.go:104] pod "etcd-pause-813136" is not "Ready", error: <nil>
	W1219 03:45:14.679344   44926 pod_ready.go:104] pod "etcd-pause-813136" is not "Ready", error: <nil>
	I1219 03:45:15.178776   44926 pod_ready.go:94] pod "etcd-pause-813136" is "Ready"
	I1219 03:45:15.178800   44926 pod_ready.go:86] duration metric: took 11.505698713s for pod "etcd-pause-813136" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:45:15.181512   44926 pod_ready.go:83] waiting for pod "kube-apiserver-pause-813136" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:45:15.185065   44926 pod_ready.go:94] pod "kube-apiserver-pause-813136" is "Ready"
	I1219 03:45:15.185082   44926 pod_ready.go:86] duration metric: took 3.553393ms for pod "kube-apiserver-pause-813136" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:45:15.188507   44926 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-813136" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:45:15.192337   44926 pod_ready.go:94] pod "kube-controller-manager-pause-813136" is "Ready"
	I1219 03:45:15.192354   44926 pod_ready.go:86] duration metric: took 3.832239ms for pod "kube-controller-manager-pause-813136" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:45:15.193945   44926 pod_ready.go:83] waiting for pod "kube-proxy-kxqfm" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:45:15.376868   44926 pod_ready.go:94] pod "kube-proxy-kxqfm" is "Ready"
	I1219 03:45:15.376894   44926 pod_ready.go:86] duration metric: took 182.927624ms for pod "kube-proxy-kxqfm" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:45:15.577249   44926 pod_ready.go:83] waiting for pod "kube-scheduler-pause-813136" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:45:15.977537   44926 pod_ready.go:94] pod "kube-scheduler-pause-813136" is "Ready"
	I1219 03:45:15.977585   44926 pod_ready.go:86] duration metric: took 400.283285ms for pod "kube-scheduler-pause-813136" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:45:15.977607   44926 pod_ready.go:40] duration metric: took 12.31801015s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:45:16.022052   44926 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 03:45:16.023909   44926 out.go:179] * Done! kubectl is now configured to use "pause-813136" cluster and "default" namespace by default
	I1219 03:45:13.386887   43491 api_server.go:253] Checking apiserver healthz at https://192.168.72.51:8443/healthz ...
	I1219 03:45:13.387595   43491 api_server.go:269] stopped: https://192.168.72.51:8443/healthz: Get "https://192.168.72.51:8443/healthz": dial tcp 192.168.72.51:8443: connect: connection refused
	I1219 03:45:13.387672   43491 cri.go:57] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1219 03:45:13.387738   43491 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1219 03:45:13.438779   43491 cri.go:92] found id: "585e3cce44d1126e40385c0604426be3495c3b57b7698e0363a5f1983d7e285b"
	I1219 03:45:13.438805   43491 cri.go:92] found id: ""
	I1219 03:45:13.438814   43491 logs.go:282] 1 containers: [585e3cce44d1126e40385c0604426be3495c3b57b7698e0363a5f1983d7e285b]
	I1219 03:45:13.438877   43491 ssh_runner.go:195] Run: which crictl
	I1219 03:45:13.443119   43491 cri.go:57] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1219 03:45:13.443187   43491 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1219 03:45:13.481236   43491 cri.go:92] found id: "53d800e1ee96bfba31e8f740b14b8da38d8dfab6afc39ed2f1d301b246a67fe7"
	I1219 03:45:13.481259   43491 cri.go:92] found id: ""
	I1219 03:45:13.481270   43491 logs.go:282] 1 containers: [53d800e1ee96bfba31e8f740b14b8da38d8dfab6afc39ed2f1d301b246a67fe7]
	I1219 03:45:13.481334   43491 ssh_runner.go:195] Run: which crictl
	I1219 03:45:13.485642   43491 cri.go:57] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1219 03:45:13.485713   43491 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1219 03:45:13.524898   43491 cri.go:92] found id: "b59158ce45f13fa82d44cfc35694a0a0243a43dd2b96405d95aa089a34e53d4f"
	I1219 03:45:13.524924   43491 cri.go:92] found id: ""
	I1219 03:45:13.524934   43491 logs.go:282] 1 containers: [b59158ce45f13fa82d44cfc35694a0a0243a43dd2b96405d95aa089a34e53d4f]
	I1219 03:45:13.524996   43491 ssh_runner.go:195] Run: which crictl
	I1219 03:45:13.530411   43491 cri.go:57] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1219 03:45:13.530484   43491 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1219 03:45:13.580168   43491 cri.go:92] found id: "5e17df7e58c54d28fb401e28a90bb4aa28fde085d64c01000b43817f26687c31"
	I1219 03:45:13.580188   43491 cri.go:92] found id: "26a5313abc2a12853d08d1fb5726fed4b6a0e0ce8cd2e3beb603e648d475d54f"
	I1219 03:45:13.580192   43491 cri.go:92] found id: ""
	I1219 03:45:13.580199   43491 logs.go:282] 2 containers: [5e17df7e58c54d28fb401e28a90bb4aa28fde085d64c01000b43817f26687c31 26a5313abc2a12853d08d1fb5726fed4b6a0e0ce8cd2e3beb603e648d475d54f]
	I1219 03:45:13.580261   43491 ssh_runner.go:195] Run: which crictl
	I1219 03:45:13.584629   43491 ssh_runner.go:195] Run: which crictl
	I1219 03:45:13.589773   43491 cri.go:57] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1219 03:45:13.589832   43491 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1219 03:45:13.633853   43491 cri.go:92] found id: "0cb084af389a1bd0b5c3b71a9e44e94bba7195b499727a892a147b6da31605ed"
	I1219 03:45:13.633877   43491 cri.go:92] found id: ""
	I1219 03:45:13.633888   43491 logs.go:282] 1 containers: [0cb084af389a1bd0b5c3b71a9e44e94bba7195b499727a892a147b6da31605ed]
	I1219 03:45:13.633951   43491 ssh_runner.go:195] Run: which crictl
	I1219 03:45:13.637957   43491 cri.go:57] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1219 03:45:13.638008   43491 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1219 03:45:13.673583   43491 cri.go:92] found id: "ab75f962d8500daa81a9b0d01aef4ec2f6d74130562f0baec94aa1e5a804d4cf"
	I1219 03:45:13.673619   43491 cri.go:92] found id: ""
	I1219 03:45:13.673628   43491 logs.go:282] 1 containers: [ab75f962d8500daa81a9b0d01aef4ec2f6d74130562f0baec94aa1e5a804d4cf]
	I1219 03:45:13.673689   43491 ssh_runner.go:195] Run: which crictl
	I1219 03:45:13.678971   43491 cri.go:57] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1219 03:45:13.679033   43491 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1219 03:45:13.726701   43491 cri.go:92] found id: ""
	I1219 03:45:13.726728   43491 logs.go:282] 0 containers: []
	W1219 03:45:13.726738   43491 logs.go:284] No container was found matching "kindnet"
	I1219 03:45:13.726745   43491 cri.go:57] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1219 03:45:13.726807   43491 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1219 03:45:13.764699   43491 cri.go:92] found id: "7356291d08314af75a3b5d53d5f33844edd5a37751b81e5a248e5ffbf789859b"
	I1219 03:45:13.764723   43491 cri.go:92] found id: ""
	I1219 03:45:13.764734   43491 logs.go:282] 1 containers: [7356291d08314af75a3b5d53d5f33844edd5a37751b81e5a248e5ffbf789859b]
	I1219 03:45:13.764795   43491 ssh_runner.go:195] Run: which crictl
	I1219 03:45:13.769559   43491 logs.go:123] Gathering logs for kube-proxy [0cb084af389a1bd0b5c3b71a9e44e94bba7195b499727a892a147b6da31605ed] ...
	I1219 03:45:13.769606   43491 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0cb084af389a1bd0b5c3b71a9e44e94bba7195b499727a892a147b6da31605ed"
	I1219 03:45:13.809145   43491 logs.go:123] Gathering logs for kube-controller-manager [ab75f962d8500daa81a9b0d01aef4ec2f6d74130562f0baec94aa1e5a804d4cf] ...
	I1219 03:45:13.809168   43491 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab75f962d8500daa81a9b0d01aef4ec2f6d74130562f0baec94aa1e5a804d4cf"
	I1219 03:45:13.844641   43491 logs.go:123] Gathering logs for storage-provisioner [7356291d08314af75a3b5d53d5f33844edd5a37751b81e5a248e5ffbf789859b] ...
	I1219 03:45:13.844677   43491 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7356291d08314af75a3b5d53d5f33844edd5a37751b81e5a248e5ffbf789859b"
	I1219 03:45:13.882704   43491 logs.go:123] Gathering logs for kubelet ...
	I1219 03:45:13.882737   43491 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1219 03:45:13.976312   43491 logs.go:123] Gathering logs for dmesg ...
	I1219 03:45:13.976343   43491 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1219 03:45:13.990634   43491 logs.go:123] Gathering logs for coredns [b59158ce45f13fa82d44cfc35694a0a0243a43dd2b96405d95aa089a34e53d4f] ...
	I1219 03:45:13.990658   43491 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b59158ce45f13fa82d44cfc35694a0a0243a43dd2b96405d95aa089a34e53d4f"
	I1219 03:45:14.023886   43491 logs.go:123] Gathering logs for CRI-O ...
	I1219 03:45:14.023916   43491 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1219 03:45:14.344510   43491 logs.go:123] Gathering logs for container status ...
	I1219 03:45:14.344542   43491 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1219 03:45:14.381734   43491 logs.go:123] Gathering logs for describe nodes ...
	I1219 03:45:14.381765   43491 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1219 03:45:14.458422   43491 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1219 03:45:14.458442   43491 logs.go:123] Gathering logs for kube-apiserver [585e3cce44d1126e40385c0604426be3495c3b57b7698e0363a5f1983d7e285b] ...
	I1219 03:45:14.458455   43491 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 585e3cce44d1126e40385c0604426be3495c3b57b7698e0363a5f1983d7e285b"
	I1219 03:45:14.496781   43491 logs.go:123] Gathering logs for etcd [53d800e1ee96bfba31e8f740b14b8da38d8dfab6afc39ed2f1d301b246a67fe7] ...
	I1219 03:45:14.496807   43491 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 53d800e1ee96bfba31e8f740b14b8da38d8dfab6afc39ed2f1d301b246a67fe7"
	I1219 03:45:14.545371   43491 logs.go:123] Gathering logs for kube-scheduler [5e17df7e58c54d28fb401e28a90bb4aa28fde085d64c01000b43817f26687c31] ...
	I1219 03:45:14.545398   43491 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e17df7e58c54d28fb401e28a90bb4aa28fde085d64c01000b43817f26687c31"
	I1219 03:45:14.609404   43491 logs.go:123] Gathering logs for kube-scheduler [26a5313abc2a12853d08d1fb5726fed4b6a0e0ce8cd2e3beb603e648d475d54f] ...
	I1219 03:45:14.609435   43491 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26a5313abc2a12853d08d1fb5726fed4b6a0e0ce8cd2e3beb603e648d475d54f"
	I1219 03:45:12.870606   45567 main.go:144] libmachine: domain force-systemd-env-919893 has defined MAC address 52:54:00:4b:b6:04 in network mk-force-systemd-env-919893
	I1219 03:45:12.871171   45567 main.go:144] libmachine: no network interface addresses found for domain force-systemd-env-919893 (source=lease)
	I1219 03:45:12.871182   45567 main.go:144] libmachine: trying to list again with source=arp
	I1219 03:45:12.871559   45567 main.go:144] libmachine: unable to find current IP address of domain force-systemd-env-919893 in network mk-force-systemd-env-919893 (interfaces detected: [])
	I1219 03:45:12.871618   45567 retry.go:31] will retry after 2.541514084s: waiting for domain to come up
	I1219 03:45:15.416156   45567 main.go:144] libmachine: domain force-systemd-env-919893 has defined MAC address 52:54:00:4b:b6:04 in network mk-force-systemd-env-919893
	I1219 03:45:15.416798   45567 main.go:144] libmachine: no network interface addresses found for domain force-systemd-env-919893 (source=lease)
	I1219 03:45:15.416813   45567 main.go:144] libmachine: trying to list again with source=arp
	I1219 03:45:15.417124   45567 main.go:144] libmachine: unable to find current IP address of domain force-systemd-env-919893 in network mk-force-systemd-env-919893 (interfaces detected: [])
	I1219 03:45:15.417150   45567 retry.go:31] will retry after 3.988004815s: waiting for domain to come up
	
	
	==> CRI-O <==
	Dec 19 03:45:18 pause-813136 crio[2828]: time="2025-12-19 03:45:18.260436364Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766115918260411588,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:128011,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a3c62867-606d-4dec-881a-83bbba4cbf8b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 03:45:18 pause-813136 crio[2828]: time="2025-12-19 03:45:18.261332866Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1fa0dc76-640f-40c0-9dd6-2d4d76ad7818 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 03:45:18 pause-813136 crio[2828]: time="2025-12-19 03:45:18.261415887Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1fa0dc76-640f-40c0-9dd6-2d4d76ad7818 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 03:45:18 pause-813136 crio[2828]: time="2025-12-19 03:45:18.261711996Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0e3b73cba10f1de7e59c5357ad7e6a8855562c9e28088a1f26770e5c2a5c6b10,PodSandboxId:420fab7432d7a415708548ba6a95a89f0781c322e959973f31aac572152a9bfb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766115898322374472,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-813136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81427598da6ffeede2148b7ada3bb803,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\"
:\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f475fc955ac4750ebbe4b3f8c05722a7b60f12916f2a2f3ee30fab6baf28977,PodSandboxId:3c099d483dc21a6a65340493acc982e1ffbb20e3446daed5ce050c4ae86907e5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766115898296252330,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-813136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59437dc8a57155b1725300f728d58a45,},Annot
ations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49abc639ac06f44884351929f22ff79292ad918b0c774486fdc7c323c07395f3,PodSandboxId:32a265f1506cea89cd1c34f047a22965e4fb9999957e1fa798ddd140c2716a55,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1766115898286432338,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-813136
,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9014fff5f634f186b58a0042a2acb36f,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49975ac9ed6143031b48b7ae6cc7d360646e2658e828592bb24fcd876c6d6e64,PodSandboxId:944e3ad7c03797e8f6bdbbfaf57cfb60768e2cad815fea67aed03ec0f0da3a81,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766115898263700136,Labels:map[string]string{io.
kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-813136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74b323c3c08c8f8cb968a4a58c7bcdd2,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:561fa3e63a07ee523e11e2e89b5973a0193cc158c8eeb0229d903ba3518d5308,PodSandboxId:5b7fd3fafdf1a5e4e0079600a8e7bca3db2114896e424cbc0b7b8611c0dc60ec,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Sta
te:CONTAINER_RUNNING,CreatedAt:1766115890062262717,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kxqfm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12a1b329-3734-4dfb-be1c-6f3ba324c031,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:405eaa8dfbfd9068d752d97691deba809cbdf064d5f44c9b6c0ff2099dd2ab72,PodSandboxId:119c95865e5ce292e89b9c0365a614c958a42b8061f518c58672018745b72391,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:17661
15888666790317,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7b6qk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99badcb6-8825-498e-a57c-e34b1ae19d49,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e56805f91d0063dda29f58987a1a7dc2b0e77b097271529fa0b9163bf52ac142,PodSandboxId:119c95865e5ce292e89b9c0365a614c958a42b8061f518
c58672018745b72391,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1766115867986072659,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7b6qk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99badcb6-8825-498e-a57c-e34b1ae19d49,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:962f0a8a67335a88760ed2a1a0c705e78a09cf72bc575a1e326bbade3377b38f,PodSandboxId:5b7fd3fafdf1a5e4e0079600a8e7bca3db2114896e424cbc0b7b8611c0dc60ec,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1766115867099785526,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kxqfm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12a1b329-3734-4dfb-be1c-6f3ba324c031,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ad4d5a9860531cb61c9a0e3b9d0bf2ff3b7f81b1ef30c9aa66047be00028f5e,PodSandboxId:944e3ad7c03797e8f6bdbbfaf57cfb60768e2cad815fea67aed03ec0f0da3a81,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1766115867085193269,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-813136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74b323c3c08c8f8cb968a4a58c7bcdd2,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\"
:2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ff9551c32b87b4e8e57071827c7b999a4d9dccf0cb90592e1b30972b339fc43,PodSandboxId:3c099d483dc21a6a65340493acc982e1ffbb20e3446daed5ce050c4ae86907e5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1766115867050538295,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-813136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59437dc8a57155b1725300f728d58a45,},Annotations:map[string]string{io.kubernetes.container.hash:
20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18cd24c1b1176f2a47b1a9b256f8e63a5e52048661701cfcd864c2f888cd8444,PodSandboxId:420fab7432d7a415708548ba6a95a89f0781c322e959973f31aac572152a9bfb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedAt:1766115866999261778,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-813136,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 81427598da6ffeede2148b7ada3bb803,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fe030c193f1e8b9d4b9f05840eff806d4133ae4e14349cdc0bf44b641318844,PodSandboxId:32a265f1506cea89cd1c34f047a22965e4fb9999957e1fa798ddd140c2716a55,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_EXITED,CreatedAt:1766115866983961320,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-813136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9014fff5f634f186b58a0042a2acb36f,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1fa0dc76-640f-40c0-9dd6-2d4d76ad7818 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 03:45:18 pause-813136 crio[2828]: time="2025-12-19 03:45:18.298181755Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5e11afc4-11ea-43f8-97db-4989bc9052be name=/runtime.v1.RuntimeService/Version
	Dec 19 03:45:18 pause-813136 crio[2828]: time="2025-12-19 03:45:18.298264124Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5e11afc4-11ea-43f8-97db-4989bc9052be name=/runtime.v1.RuntimeService/Version
	Dec 19 03:45:18 pause-813136 crio[2828]: time="2025-12-19 03:45:18.299708630Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=35dc98f8-3709-416d-a44b-25cb76d0ddaf name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 03:45:18 pause-813136 crio[2828]: time="2025-12-19 03:45:18.300374031Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766115918300351027,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:128011,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=35dc98f8-3709-416d-a44b-25cb76d0ddaf name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 03:45:18 pause-813136 crio[2828]: time="2025-12-19 03:45:18.301465006Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b8bf4946-1a9e-40a1-9e76-628088fa454d name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 03:45:18 pause-813136 crio[2828]: time="2025-12-19 03:45:18.301526879Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b8bf4946-1a9e-40a1-9e76-628088fa454d name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 03:45:18 pause-813136 crio[2828]: time="2025-12-19 03:45:18.301804804Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0e3b73cba10f1de7e59c5357ad7e6a8855562c9e28088a1f26770e5c2a5c6b10,PodSandboxId:420fab7432d7a415708548ba6a95a89f0781c322e959973f31aac572152a9bfb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766115898322374472,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-813136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81427598da6ffeede2148b7ada3bb803,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\"
:\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f475fc955ac4750ebbe4b3f8c05722a7b60f12916f2a2f3ee30fab6baf28977,PodSandboxId:3c099d483dc21a6a65340493acc982e1ffbb20e3446daed5ce050c4ae86907e5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766115898296252330,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-813136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59437dc8a57155b1725300f728d58a45,},Annot
ations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49abc639ac06f44884351929f22ff79292ad918b0c774486fdc7c323c07395f3,PodSandboxId:32a265f1506cea89cd1c34f047a22965e4fb9999957e1fa798ddd140c2716a55,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1766115898286432338,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-813136
,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9014fff5f634f186b58a0042a2acb36f,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49975ac9ed6143031b48b7ae6cc7d360646e2658e828592bb24fcd876c6d6e64,PodSandboxId:944e3ad7c03797e8f6bdbbfaf57cfb60768e2cad815fea67aed03ec0f0da3a81,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766115898263700136,Labels:map[string]string{io.
kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-813136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74b323c3c08c8f8cb968a4a58c7bcdd2,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:561fa3e63a07ee523e11e2e89b5973a0193cc158c8eeb0229d903ba3518d5308,PodSandboxId:5b7fd3fafdf1a5e4e0079600a8e7bca3db2114896e424cbc0b7b8611c0dc60ec,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Sta
te:CONTAINER_RUNNING,CreatedAt:1766115890062262717,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kxqfm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12a1b329-3734-4dfb-be1c-6f3ba324c031,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:405eaa8dfbfd9068d752d97691deba809cbdf064d5f44c9b6c0ff2099dd2ab72,PodSandboxId:119c95865e5ce292e89b9c0365a614c958a42b8061f518c58672018745b72391,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:17661
15888666790317,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7b6qk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99badcb6-8825-498e-a57c-e34b1ae19d49,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e56805f91d0063dda29f58987a1a7dc2b0e77b097271529fa0b9163bf52ac142,PodSandboxId:119c95865e5ce292e89b9c0365a614c958a42b8061f518
c58672018745b72391,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1766115867986072659,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7b6qk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99badcb6-8825-498e-a57c-e34b1ae19d49,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:962f0a8a67335a88760ed2a1a0c705e78a09cf72bc575a1e326bbade3377b38f,PodSandboxId:5b7fd3fafdf1a5e4e0079600a8e7bca3db2114896e424cbc0b7b8611c0dc60ec,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1766115867099785526,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kxqfm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12a1b329-3734-4dfb-be1c-6f3ba324c031,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ad4d5a9860531cb61c9a0e3b9d0bf2ff3b7f81b1ef30c9aa66047be00028f5e,PodSandboxId:944e3ad7c03797e8f6bdbbfaf57cfb60768e2cad815fea67aed03ec0f0da3a81,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1766115867085193269,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-813136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74b323c3c08c8f8cb968a4a58c7bcdd2,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\"
:2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ff9551c32b87b4e8e57071827c7b999a4d9dccf0cb90592e1b30972b339fc43,PodSandboxId:3c099d483dc21a6a65340493acc982e1ffbb20e3446daed5ce050c4ae86907e5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1766115867050538295,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-813136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59437dc8a57155b1725300f728d58a45,},Annotations:map[string]string{io.kubernetes.container.hash:
20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18cd24c1b1176f2a47b1a9b256f8e63a5e52048661701cfcd864c2f888cd8444,PodSandboxId:420fab7432d7a415708548ba6a95a89f0781c322e959973f31aac572152a9bfb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedAt:1766115866999261778,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-813136,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 81427598da6ffeede2148b7ada3bb803,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fe030c193f1e8b9d4b9f05840eff806d4133ae4e14349cdc0bf44b641318844,PodSandboxId:32a265f1506cea89cd1c34f047a22965e4fb9999957e1fa798ddd140c2716a55,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_EXITED,CreatedAt:1766115866983961320,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-813136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9014fff5f634f186b58a0042a2acb36f,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b8bf4946-1a9e-40a1-9e76-628088fa454d name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 03:45:18 pause-813136 crio[2828]: time="2025-12-19 03:45:18.337574887Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1ddf7513-b3d7-45a1-86fe-7847de442c3d name=/runtime.v1.RuntimeService/Version
	Dec 19 03:45:18 pause-813136 crio[2828]: time="2025-12-19 03:45:18.337776593Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1ddf7513-b3d7-45a1-86fe-7847de442c3d name=/runtime.v1.RuntimeService/Version
	Dec 19 03:45:18 pause-813136 crio[2828]: time="2025-12-19 03:45:18.339174377Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a1520bc8-5329-43d9-9f65-087a1e88f418 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 03:45:18 pause-813136 crio[2828]: time="2025-12-19 03:45:18.339644515Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766115918339624537,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:128011,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a1520bc8-5329-43d9-9f65-087a1e88f418 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 03:45:18 pause-813136 crio[2828]: time="2025-12-19 03:45:18.340498944Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e73e30ee-670e-4333-aaf9-71fbf71e810e name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 03:45:18 pause-813136 crio[2828]: time="2025-12-19 03:45:18.340644186Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e73e30ee-670e-4333-aaf9-71fbf71e810e name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 03:45:18 pause-813136 crio[2828]: time="2025-12-19 03:45:18.341093595Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0e3b73cba10f1de7e59c5357ad7e6a8855562c9e28088a1f26770e5c2a5c6b10,PodSandboxId:420fab7432d7a415708548ba6a95a89f0781c322e959973f31aac572152a9bfb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766115898322374472,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-813136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81427598da6ffeede2148b7ada3bb803,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\"
:\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f475fc955ac4750ebbe4b3f8c05722a7b60f12916f2a2f3ee30fab6baf28977,PodSandboxId:3c099d483dc21a6a65340493acc982e1ffbb20e3446daed5ce050c4ae86907e5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766115898296252330,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-813136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59437dc8a57155b1725300f728d58a45,},Annot
ations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49abc639ac06f44884351929f22ff79292ad918b0c774486fdc7c323c07395f3,PodSandboxId:32a265f1506cea89cd1c34f047a22965e4fb9999957e1fa798ddd140c2716a55,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1766115898286432338,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-813136
,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9014fff5f634f186b58a0042a2acb36f,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49975ac9ed6143031b48b7ae6cc7d360646e2658e828592bb24fcd876c6d6e64,PodSandboxId:944e3ad7c03797e8f6bdbbfaf57cfb60768e2cad815fea67aed03ec0f0da3a81,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766115898263700136,Labels:map[string]string{io.
kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-813136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74b323c3c08c8f8cb968a4a58c7bcdd2,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:561fa3e63a07ee523e11e2e89b5973a0193cc158c8eeb0229d903ba3518d5308,PodSandboxId:5b7fd3fafdf1a5e4e0079600a8e7bca3db2114896e424cbc0b7b8611c0dc60ec,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Sta
te:CONTAINER_RUNNING,CreatedAt:1766115890062262717,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kxqfm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12a1b329-3734-4dfb-be1c-6f3ba324c031,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:405eaa8dfbfd9068d752d97691deba809cbdf064d5f44c9b6c0ff2099dd2ab72,PodSandboxId:119c95865e5ce292e89b9c0365a614c958a42b8061f518c58672018745b72391,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:17661
15888666790317,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7b6qk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99badcb6-8825-498e-a57c-e34b1ae19d49,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e56805f91d0063dda29f58987a1a7dc2b0e77b097271529fa0b9163bf52ac142,PodSandboxId:119c95865e5ce292e89b9c0365a614c958a42b8061f518
c58672018745b72391,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1766115867986072659,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7b6qk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99badcb6-8825-498e-a57c-e34b1ae19d49,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:962f0a8a67335a88760ed2a1a0c705e78a09cf72bc575a1e326bbade3377b38f,PodSandboxId:5b7fd3fafdf1a5e4e0079600a8e7bca3db2114896e424cbc0b7b8611c0dc60ec,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1766115867099785526,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kxqfm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12a1b329-3734-4dfb-be1c-6f3ba324c031,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ad4d5a9860531cb61c9a0e3b9d0bf2ff3b7f81b1ef30c9aa66047be00028f5e,PodSandboxId:944e3ad7c03797e8f6bdbbfaf57cfb60768e2cad815fea67aed03ec0f0da3a81,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1766115867085193269,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-813136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74b323c3c08c8f8cb968a4a58c7bcdd2,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\"
:2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ff9551c32b87b4e8e57071827c7b999a4d9dccf0cb90592e1b30972b339fc43,PodSandboxId:3c099d483dc21a6a65340493acc982e1ffbb20e3446daed5ce050c4ae86907e5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1766115867050538295,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-813136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59437dc8a57155b1725300f728d58a45,},Annotations:map[string]string{io.kubernetes.container.hash:
20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18cd24c1b1176f2a47b1a9b256f8e63a5e52048661701cfcd864c2f888cd8444,PodSandboxId:420fab7432d7a415708548ba6a95a89f0781c322e959973f31aac572152a9bfb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedAt:1766115866999261778,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-813136,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 81427598da6ffeede2148b7ada3bb803,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fe030c193f1e8b9d4b9f05840eff806d4133ae4e14349cdc0bf44b641318844,PodSandboxId:32a265f1506cea89cd1c34f047a22965e4fb9999957e1fa798ddd140c2716a55,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_EXITED,CreatedAt:1766115866983961320,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-813136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9014fff5f634f186b58a0042a2acb36f,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e73e30ee-670e-4333-aaf9-71fbf71e810e name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 03:45:18 pause-813136 crio[2828]: time="2025-12-19 03:45:18.376979506Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1b9e2d56-affb-4da3-b261-01d7b69f344f name=/runtime.v1.RuntimeService/Version
	Dec 19 03:45:18 pause-813136 crio[2828]: time="2025-12-19 03:45:18.377048049Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1b9e2d56-affb-4da3-b261-01d7b69f344f name=/runtime.v1.RuntimeService/Version
	Dec 19 03:45:18 pause-813136 crio[2828]: time="2025-12-19 03:45:18.378741973Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bbf9b625-4bf6-481a-8e37-e9907c701a5a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 03:45:18 pause-813136 crio[2828]: time="2025-12-19 03:45:18.379412836Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766115918379384219,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:128011,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bbf9b625-4bf6-481a-8e37-e9907c701a5a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 03:45:18 pause-813136 crio[2828]: time="2025-12-19 03:45:18.380303413Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c7878a9e-6934-4865-90f8-c6ea60c85cd5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 03:45:18 pause-813136 crio[2828]: time="2025-12-19 03:45:18.380390565Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c7878a9e-6934-4865-90f8-c6ea60c85cd5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 03:45:18 pause-813136 crio[2828]: time="2025-12-19 03:45:18.380626630Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0e3b73cba10f1de7e59c5357ad7e6a8855562c9e28088a1f26770e5c2a5c6b10,PodSandboxId:420fab7432d7a415708548ba6a95a89f0781c322e959973f31aac572152a9bfb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766115898322374472,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-813136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81427598da6ffeede2148b7ada3bb803,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\"
:\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f475fc955ac4750ebbe4b3f8c05722a7b60f12916f2a2f3ee30fab6baf28977,PodSandboxId:3c099d483dc21a6a65340493acc982e1ffbb20e3446daed5ce050c4ae86907e5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766115898296252330,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-813136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59437dc8a57155b1725300f728d58a45,},Annot
ations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49abc639ac06f44884351929f22ff79292ad918b0c774486fdc7c323c07395f3,PodSandboxId:32a265f1506cea89cd1c34f047a22965e4fb9999957e1fa798ddd140c2716a55,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1766115898286432338,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-813136
,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9014fff5f634f186b58a0042a2acb36f,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49975ac9ed6143031b48b7ae6cc7d360646e2658e828592bb24fcd876c6d6e64,PodSandboxId:944e3ad7c03797e8f6bdbbfaf57cfb60768e2cad815fea67aed03ec0f0da3a81,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766115898263700136,Labels:map[string]string{io.
kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-813136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74b323c3c08c8f8cb968a4a58c7bcdd2,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:561fa3e63a07ee523e11e2e89b5973a0193cc158c8eeb0229d903ba3518d5308,PodSandboxId:5b7fd3fafdf1a5e4e0079600a8e7bca3db2114896e424cbc0b7b8611c0dc60ec,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Sta
te:CONTAINER_RUNNING,CreatedAt:1766115890062262717,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kxqfm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12a1b329-3734-4dfb-be1c-6f3ba324c031,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:405eaa8dfbfd9068d752d97691deba809cbdf064d5f44c9b6c0ff2099dd2ab72,PodSandboxId:119c95865e5ce292e89b9c0365a614c958a42b8061f518c58672018745b72391,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:17661
15888666790317,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7b6qk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99badcb6-8825-498e-a57c-e34b1ae19d49,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e56805f91d0063dda29f58987a1a7dc2b0e77b097271529fa0b9163bf52ac142,PodSandboxId:119c95865e5ce292e89b9c0365a614c958a42b8061f518
c58672018745b72391,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1766115867986072659,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7b6qk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99badcb6-8825-498e-a57c-e34b1ae19d49,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:962f0a8a67335a88760ed2a1a0c705e78a09cf72bc575a1e326bbade3377b38f,PodSandboxId:5b7fd3fafdf1a5e4e0079600a8e7bca3db2114896e424cbc0b7b8611c0dc60ec,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1766115867099785526,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kxqfm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12a1b329-3734-4dfb-be1c-6f3ba324c031,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ad4d5a9860531cb61c9a0e3b9d0bf2ff3b7f81b1ef30c9aa66047be00028f5e,PodSandboxId:944e3ad7c03797e8f6bdbbfaf57cfb60768e2cad815fea67aed03ec0f0da3a81,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1766115867085193269,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-813136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74b323c3c08c8f8cb968a4a58c7bcdd2,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\"
:2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ff9551c32b87b4e8e57071827c7b999a4d9dccf0cb90592e1b30972b339fc43,PodSandboxId:3c099d483dc21a6a65340493acc982e1ffbb20e3446daed5ce050c4ae86907e5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1766115867050538295,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-813136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59437dc8a57155b1725300f728d58a45,},Annotations:map[string]string{io.kubernetes.container.hash:
20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18cd24c1b1176f2a47b1a9b256f8e63a5e52048661701cfcd864c2f888cd8444,PodSandboxId:420fab7432d7a415708548ba6a95a89f0781c322e959973f31aac572152a9bfb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedAt:1766115866999261778,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-813136,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 81427598da6ffeede2148b7ada3bb803,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fe030c193f1e8b9d4b9f05840eff806d4133ae4e14349cdc0bf44b641318844,PodSandboxId:32a265f1506cea89cd1c34f047a22965e4fb9999957e1fa798ddd140c2716a55,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_EXITED,CreatedAt:1766115866983961320,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-813136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9014fff5f634f186b58a0042a2acb36f,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c7878a9e-6934-4865-90f8-c6ea60c85cd5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	0e3b73cba10f1       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942   20 seconds ago      Running             kube-controller-manager   2                   420fab7432d7a       kube-controller-manager-pause-813136   kube-system
	9f475fc955ac4       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78   20 seconds ago      Running             kube-scheduler            2                   3c099d483dc21       kube-scheduler-pause-813136            kube-system
	49abc639ac06f       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c   20 seconds ago      Running             kube-apiserver            2                   32a265f1506ce       kube-apiserver-pause-813136            kube-system
	49975ac9ed614       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   20 seconds ago      Running             etcd                      2                   944e3ad7c0379       etcd-pause-813136                      kube-system
	561fa3e63a07e       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691   28 seconds ago      Running             kube-proxy                2                   5b7fd3fafdf1a       kube-proxy-kxqfm                       kube-system
	405eaa8dfbfd9       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   29 seconds ago      Running             coredns                   2                   119c95865e5ce       coredns-66bc5c9577-7b6qk               kube-system
	e56805f91d006       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   50 seconds ago      Exited              coredns                   1                   119c95865e5ce       coredns-66bc5c9577-7b6qk               kube-system
	962f0a8a67335       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691   51 seconds ago      Exited              kube-proxy                1                   5b7fd3fafdf1a       kube-proxy-kxqfm                       kube-system
	5ad4d5a986053       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   51 seconds ago      Exited              etcd                      1                   944e3ad7c0379       etcd-pause-813136                      kube-system
	9ff9551c32b87       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78   51 seconds ago      Exited              kube-scheduler            1                   3c099d483dc21       kube-scheduler-pause-813136            kube-system
	18cd24c1b1176       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942   51 seconds ago      Exited              kube-controller-manager   1                   420fab7432d7a       kube-controller-manager-pause-813136   kube-system
	4fe030c193f1e       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c   51 seconds ago      Exited              kube-apiserver            1                   32a265f1506ce       kube-apiserver-pause-813136            kube-system
	
	
	==> coredns [405eaa8dfbfd9068d752d97691deba809cbdf064d5f44c9b6c0ff2099dd2ab72] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60179 - 49965 "HINFO IN 634547219968973713.595630748506419850. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.074980332s
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [e56805f91d0063dda29f58987a1a7dc2b0e77b097271529fa0b9163bf52ac142] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:34071 - 8539 "HINFO IN 6655959907211568805.2874153447990880425. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.026333941s
	
	
	==> describe nodes <==
	Name:               pause-813136
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-813136
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=pause-813136
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T03_43_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 03:43:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-813136
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 03:45:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 03:45:01 +0000   Fri, 19 Dec 2025 03:43:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 03:45:01 +0000   Fri, 19 Dec 2025 03:43:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 03:45:01 +0000   Fri, 19 Dec 2025 03:43:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 03:45:01 +0000   Fri, 19 Dec 2025 03:43:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.162
	  Hostname:    pause-813136
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 a3118eb3fc0f42039cb96b373705a0ee
	  System UUID:                a3118eb3-fc0f-4203-9cb9-6b373705a0ee
	  Boot ID:                    42f1e44e-9222-404e-aa9c-3ad45b425eff
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-7b6qk                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     108s
	  kube-system                 etcd-pause-813136                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         113s
	  kube-system                 kube-apiserver-pause-813136             250m (12%)    0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-pause-813136    200m (10%)    0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-kxqfm                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-pause-813136             100m (5%)     0 (0%)      0 (0%)           0 (0%)         113s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 106s                 kube-proxy       
	  Normal   Starting                 28s                  kube-proxy       
	  Normal   Starting                 47s                  kube-proxy       
	  Normal   NodeHasSufficientPID     2m1s (x7 over 2m1s)  kubelet          Node pause-813136 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m1s (x8 over 2m1s)  kubelet          Node pause-813136 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  2m1s (x8 over 2m1s)  kubelet          Node pause-813136 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  2m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 114s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  113s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  113s                 kubelet          Node pause-813136 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    113s                 kubelet          Node pause-813136 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     113s                 kubelet          Node pause-813136 status is now: NodeHasSufficientPID
	  Normal   NodeReady                112s                 kubelet          Node pause-813136 status is now: NodeReady
	  Normal   RegisteredNode           109s                 node-controller  Node pause-813136 event: Registered Node pause-813136 in Controller
	  Warning  ContainerGCFailed        53s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           44s                  node-controller  Node pause-813136 event: Registered Node pause-813136 in Controller
	  Normal   Starting                 21s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  21s (x8 over 21s)    kubelet          Node pause-813136 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    21s (x8 over 21s)    kubelet          Node pause-813136 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     21s (x7 over 21s)    kubelet          Node pause-813136 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           14s                  node-controller  Node pause-813136 event: Registered Node pause-813136 in Controller
	
	
	==> dmesg <==
	[Dec19 03:42] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000049] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Dec19 03:43] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.161158] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.083944] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.111911] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.150433] kauditd_printk_skb: 171 callbacks suppressed
	[  +6.003343] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.041261] kauditd_printk_skb: 219 callbacks suppressed
	[Dec19 03:44] kauditd_printk_skb: 38 callbacks suppressed
	[  +4.115386] kauditd_printk_skb: 319 callbacks suppressed
	[  +4.385570] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.151668] kauditd_printk_skb: 29 callbacks suppressed
	[Dec19 03:45] kauditd_printk_skb: 79 callbacks suppressed
	
	
	==> etcd [49975ac9ed6143031b48b7ae6cc7d360646e2658e828592bb24fcd876c6d6e64] <==
	{"level":"warn","ts":"2025-12-19T03:45:00.286045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:45:00.316051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:45:00.343111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:45:00.359102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:45:00.369638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:45:00.374932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:45:00.387532Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:45:00.396016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:45:00.408272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:45:00.420385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:45:00.432529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:45:00.443363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:45:00.461934Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:45:00.472934Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:45:00.484777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:45:00.508187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:45:00.515442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:45:00.527061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:45:00.541084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:45:00.557190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:45:00.560055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:45:00.582203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:45:00.595959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:45:00.608337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:45:00.712282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35260","server-name":"","error":"EOF"}
	
	
	==> etcd [5ad4d5a9860531cb61c9a0e3b9d0bf2ff3b7f81b1ef30c9aa66047be00028f5e] <==
	{"level":"warn","ts":"2025-12-19T03:44:29.925347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:44:29.935914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:44:29.953722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:44:29.967399Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:44:29.981806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:44:30.004875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:44:30.050493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55496","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-19T03:44:38.677439Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-19T03:44:38.677650Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-813136","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.162:2380"],"advertise-client-urls":["https://192.168.50.162:2379"]}
	{"level":"error","ts":"2025-12-19T03:44:38.677877Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-19T03:44:45.683100Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-19T03:44:45.688106Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-19T03:44:45.688778Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"25a84ac227828bb5","current-leader-member-id":"25a84ac227828bb5"}
	{"level":"info","ts":"2025-12-19T03:44:45.689383Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-19T03:44:45.689456Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-19T03:44:45.689546Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-19T03:44:45.690167Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-19T03:44:45.690189Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-19T03:44:45.689621Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.162:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-19T03:44:45.690207Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.162:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-19T03:44:45.690215Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.162:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-19T03:44:45.695766Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.50.162:2380"}
	{"level":"error","ts":"2025-12-19T03:44:45.696070Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.162:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-19T03:44:45.696180Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.50.162:2380"}
	{"level":"info","ts":"2025-12-19T03:44:45.696314Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-813136","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.162:2380"],"advertise-client-urls":["https://192.168.50.162:2379"]}
	
	
	==> kernel <==
	 03:45:18 up 2 min,  0 users,  load average: 1.21, 0.45, 0.16
	Linux pause-813136 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Dec 17 12:49:57 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [49abc639ac06f44884351929f22ff79292ad918b0c774486fdc7c323c07395f3] <==
	I1219 03:45:01.482570       1 autoregister_controller.go:144] Starting autoregister controller
	I1219 03:45:01.482576       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1219 03:45:01.537166       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1219 03:45:01.541240       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1219 03:45:01.541274       1 policy_source.go:240] refreshing policies
	I1219 03:45:01.556420       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1219 03:45:01.562313       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1219 03:45:01.563088       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1219 03:45:01.566656       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1219 03:45:01.566684       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1219 03:45:01.571485       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1219 03:45:01.572807       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1219 03:45:01.574096       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1219 03:45:01.575371       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1219 03:45:01.606825       1 cache.go:39] Caches are synced for autoregister controller
	I1219 03:45:01.615417       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1219 03:45:01.904840       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1219 03:45:02.372748       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1219 03:45:03.118602       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1219 03:45:03.164044       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1219 03:45:03.201355       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1219 03:45:03.209514       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1219 03:45:04.893200       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1219 03:45:05.191557       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1219 03:45:05.289195       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [4fe030c193f1e8b9d4b9f05840eff806d4133ae4e14349cdc0bf44b641318844] <==
	W1219 03:44:54.779235       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1219 03:44:54.801641       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1219 03:44:54.922430       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1219 03:44:54.989437       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1219 03:44:55.003054       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1219 03:44:55.026311       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1219 03:44:55.064913       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1219 03:44:55.069818       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1219 03:44:55.115320       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1219 03:44:55.139217       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1219 03:44:55.154931       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1219 03:44:55.159505       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1219 03:44:55.174408       1 logging.go:55] [core] [Channel #21 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1219 03:44:55.286264       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1219 03:44:55.301910       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1219 03:44:55.329398       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1219 03:44:55.342197       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1219 03:44:55.535189       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1219 03:44:55.537636       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1219 03:44:55.606791       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1219 03:44:55.704776       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1219 03:44:55.721439       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1219 03:44:55.796928       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1219 03:44:55.873906       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1219 03:44:55.984064       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [0e3b73cba10f1de7e59c5357ad7e6a8855562c9e28088a1f26770e5c2a5c6b10] <==
	I1219 03:45:04.886091       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1219 03:45:04.886316       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1219 03:45:04.885362       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1219 03:45:04.887179       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1219 03:45:04.885370       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1219 03:45:04.889210       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1219 03:45:04.890288       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1219 03:45:04.892189       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1219 03:45:04.893316       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1219 03:45:04.894219       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1219 03:45:04.894391       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1219 03:45:04.895984       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1219 03:45:04.896596       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1219 03:45:04.898497       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1219 03:45:04.900375       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1219 03:45:04.901041       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1219 03:45:04.903689       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1219 03:45:04.906957       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1219 03:45:04.920187       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1219 03:45:04.922553       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1219 03:45:04.926836       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1219 03:45:04.931099       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1219 03:45:04.935843       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1219 03:45:04.935878       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1219 03:45:04.938720       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [18cd24c1b1176f2a47b1a9b256f8e63a5e52048661701cfcd864c2f888cd8444] <==
	I1219 03:44:34.057709       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1219 03:44:34.057950       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1219 03:44:34.058477       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1219 03:44:34.062918       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1219 03:44:34.067823       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1219 03:44:34.069640       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1219 03:44:34.071900       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1219 03:44:34.074146       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1219 03:44:34.080367       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1219 03:44:34.080392       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1219 03:44:34.080398       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1219 03:44:34.080831       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1219 03:44:34.085967       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1219 03:44:34.088168       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1219 03:44:34.090412       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1219 03:44:34.105903       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1219 03:44:34.105915       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1219 03:44:34.106054       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1219 03:44:34.106092       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1219 03:44:34.106356       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1219 03:44:34.106644       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1219 03:44:34.106742       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-813136"
	I1219 03:44:34.106858       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1219 03:44:34.107608       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1219 03:44:34.109159       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	
	
	==> kube-proxy [561fa3e63a07ee523e11e2e89b5973a0193cc158c8eeb0229d903ba3518d5308] <==
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1219 03:44:50.338989       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1219 03:44:50.339018       1 server_linux.go:132] "Using iptables Proxier"
	I1219 03:44:50.347258       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 03:44:50.347500       1 server.go:527] "Version info" version="v1.34.3"
	I1219 03:44:50.347524       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:44:50.351382       1 config.go:200] "Starting service config controller"
	I1219 03:44:50.351505       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 03:44:50.351587       1 config.go:106] "Starting endpoint slice config controller"
	I1219 03:44:50.351593       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 03:44:50.351615       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 03:44:50.351618       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 03:44:50.353664       1 config.go:309] "Starting node config controller"
	I1219 03:44:50.353702       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 03:44:50.353778       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 03:44:50.451646       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1219 03:44:50.451666       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1219 03:44:50.451648       1 shared_informer.go:356] "Caches are synced" controller="service config"
	E1219 03:44:56.106366       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": unexpected EOF"
	E1219 03:45:01.493200       1 reflector.go:205] "Failed to watch" err="servicecidrs.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"servicecidrs\" in API group \"networking.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1219 03:45:01.493532       1 reflector.go:205] "Failed to watch" err="nodes \"pause-813136\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1219 03:45:01.493826       1 reflector.go:205] "Failed to watch" err="services is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1219 03:45:01.494252       1 reflector.go:205] "Failed to watch" err="endpointslices.discovery.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"endpointslices\" in API group \"discovery.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	
	
	==> kube-proxy [962f0a8a67335a88760ed2a1a0c705e78a09cf72bc575a1e326bbade3377b38f] <==
	I1219 03:44:29.295625       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1219 03:44:30.798186       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1219 03:44:30.798225       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.162"]
	E1219 03:44:30.798316       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 03:44:30.901604       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1219 03:44:30.901682       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1219 03:44:30.901704       1 server_linux.go:132] "Using iptables Proxier"
	I1219 03:44:30.912886       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 03:44:30.913234       1 server.go:527] "Version info" version="v1.34.3"
	I1219 03:44:30.913330       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:44:30.914925       1 config.go:200] "Starting service config controller"
	I1219 03:44:30.914965       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 03:44:30.915049       1 config.go:106] "Starting endpoint slice config controller"
	I1219 03:44:30.915068       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 03:44:30.915187       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 03:44:30.915210       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 03:44:30.916928       1 config.go:309] "Starting node config controller"
	I1219 03:44:30.916952       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 03:44:30.916963       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 03:44:31.015898       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1219 03:44:31.015904       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 03:44:31.015935       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [9f475fc955ac4750ebbe4b3f8c05722a7b60f12916f2a2f3ee30fab6baf28977] <==
	I1219 03:45:00.244986       1 serving.go:386] Generated self-signed cert in-memory
	W1219 03:45:01.480849       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1219 03:45:01.480885       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1219 03:45:01.480896       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1219 03:45:01.480902       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1219 03:45:01.536566       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1219 03:45:01.536713       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:45:01.539636       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:45:01.539684       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:45:01.541100       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1219 03:45:01.541349       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1219 03:45:01.640444       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [9ff9551c32b87b4e8e57071827c7b999a4d9dccf0cb90592e1b30972b339fc43] <==
	I1219 03:44:28.648758       1 serving.go:386] Generated self-signed cert in-memory
	W1219 03:44:30.655063       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1219 03:44:30.655102       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1219 03:44:30.655155       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1219 03:44:30.655165       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1219 03:44:30.750564       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1219 03:44:30.750798       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:44:30.755830       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:44:30.755868       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:44:30.756072       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1219 03:44:30.756212       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1219 03:44:30.856314       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:44:45.770353       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1219 03:44:45.770484       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1219 03:44:45.770590       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1219 03:44:45.770793       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:44:45.771002       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1219 03:44:45.771206       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 19 03:45:00 pause-813136 kubelet[4144]: E1219 03:45:00.982061    4144 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-813136\" not found" node="pause-813136"
	Dec 19 03:45:00 pause-813136 kubelet[4144]: E1219 03:45:00.982370    4144 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-813136\" not found" node="pause-813136"
	Dec 19 03:45:00 pause-813136 kubelet[4144]: E1219 03:45:00.983596    4144 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-813136\" not found" node="pause-813136"
	Dec 19 03:45:01 pause-813136 kubelet[4144]: I1219 03:45:01.523248    4144 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-813136"
	Dec 19 03:45:01 pause-813136 kubelet[4144]: I1219 03:45:01.607703    4144 kubelet_node_status.go:124] "Node was previously registered" node="pause-813136"
	Dec 19 03:45:01 pause-813136 kubelet[4144]: I1219 03:45:01.607816    4144 kubelet_node_status.go:78] "Successfully registered node" node="pause-813136"
	Dec 19 03:45:01 pause-813136 kubelet[4144]: I1219 03:45:01.607873    4144 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 19 03:45:01 pause-813136 kubelet[4144]: I1219 03:45:01.609854    4144 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 19 03:45:01 pause-813136 kubelet[4144]: E1219 03:45:01.646672    4144 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-pause-813136\" already exists" pod="kube-system/etcd-pause-813136"
	Dec 19 03:45:01 pause-813136 kubelet[4144]: I1219 03:45:01.646714    4144 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-813136"
	Dec 19 03:45:01 pause-813136 kubelet[4144]: E1219 03:45:01.658764    4144 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-813136\" already exists" pod="kube-system/kube-apiserver-pause-813136"
	Dec 19 03:45:01 pause-813136 kubelet[4144]: I1219 03:45:01.658804    4144 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-813136"
	Dec 19 03:45:01 pause-813136 kubelet[4144]: E1219 03:45:01.671188    4144 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-813136\" already exists" pod="kube-system/kube-controller-manager-pause-813136"
	Dec 19 03:45:01 pause-813136 kubelet[4144]: I1219 03:45:01.671216    4144 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-813136"
	Dec 19 03:45:01 pause-813136 kubelet[4144]: E1219 03:45:01.682707    4144 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-813136\" already exists" pod="kube-system/kube-scheduler-pause-813136"
	Dec 19 03:45:01 pause-813136 kubelet[4144]: I1219 03:45:01.794162    4144 apiserver.go:52] "Watching apiserver"
	Dec 19 03:45:01 pause-813136 kubelet[4144]: I1219 03:45:01.825696    4144 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 19 03:45:01 pause-813136 kubelet[4144]: I1219 03:45:01.901580    4144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/12a1b329-3734-4dfb-be1c-6f3ba324c031-xtables-lock\") pod \"kube-proxy-kxqfm\" (UID: \"12a1b329-3734-4dfb-be1c-6f3ba324c031\") " pod="kube-system/kube-proxy-kxqfm"
	Dec 19 03:45:01 pause-813136 kubelet[4144]: I1219 03:45:01.901705    4144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/12a1b329-3734-4dfb-be1c-6f3ba324c031-lib-modules\") pod \"kube-proxy-kxqfm\" (UID: \"12a1b329-3734-4dfb-be1c-6f3ba324c031\") " pod="kube-system/kube-proxy-kxqfm"
	Dec 19 03:45:01 pause-813136 kubelet[4144]: I1219 03:45:01.984606    4144 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-813136"
	Dec 19 03:45:01 pause-813136 kubelet[4144]: E1219 03:45:01.995560    4144 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-813136\" already exists" pod="kube-system/kube-apiserver-pause-813136"
	Dec 19 03:45:07 pause-813136 kubelet[4144]: E1219 03:45:07.941198    4144 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766115907940870517  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:128011}  inodes_used:{value:57}}"
	Dec 19 03:45:07 pause-813136 kubelet[4144]: E1219 03:45:07.941235    4144 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766115907940870517  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:128011}  inodes_used:{value:57}}"
	Dec 19 03:45:17 pause-813136 kubelet[4144]: E1219 03:45:17.942987    4144 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766115917942583397  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:128011}  inodes_used:{value:57}}"
	Dec 19 03:45:17 pause-813136 kubelet[4144]: E1219 03:45:17.943028    4144 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766115917942583397  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:128011}  inodes_used:{value:57}}"
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-813136 -n pause-813136
helpers_test.go:270: (dbg) Run:  kubectl --context pause-813136 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (67.19s)

x
+
TestISOImage/Binaries/crictl (0s)

=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl

=== CONT  TestISOImage/Binaries/crictl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-783207 ssh "which crictl"
iso_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-783207 ssh "which crictl": context deadline exceeded (2.436µs)
iso_test.go:78: failed to verify existence of "crictl" binary : args "out/minikube-linux-amd64 -p guest-783207 ssh \"which crictl\"": context deadline exceeded
--- FAIL: TestISOImage/Binaries/crictl (0.00s)
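
This and the remaining TestISOImage/Binaries subtests below all fail the same way: "context deadline exceeded" reported after only a few hundred nanoseconds to a few microseconds, with 0.00s subtest durations. That signature means the shared test context had already expired before any of the `minikube ssh "which <binary>"` commands were launched (presumably because the guest-783207 profile never finished starting), so no binary was actually probed on the guest. A minimal Go sketch of that failure mode, using os/exec directly rather than the suite's own command helpers, with the command line copied from the report:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// A context whose deadline has already passed, analogous to the
	// suite-level deadline being exhausted before these subtests ran.
	ctx, cancel := context.WithTimeout(context.Background(), 0)
	defer cancel()

	start := time.Now()
	// exec.CommandContext never forks the process once ctx is done:
	// Run returns ctx.Err() almost immediately, without ever
	// contacting the guest-783207 VM.
	cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64", "-p", "guest-783207", "ssh", "which crictl")
	if err := cmd.Run(); err != nil {
		fmt.Printf("err=%v after %v\n", err, time.Since(start))
	}
}

Because Start refuses to launch the process once the context is done, the measured durations (2.436µs, 447ns, and so on) cover only the immediate error return, which is consistent with the instant failures above and below.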

x
+
TestISOImage/Binaries/curl (0s)

=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl

=== CONT  TestISOImage/Binaries/curl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-783207 ssh "which curl"
iso_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-783207 ssh "which curl": context deadline exceeded (447ns)
iso_test.go:78: failed to verify existence of "curl" binary : args "out/minikube-linux-amd64 -p guest-783207 ssh \"which curl\"": context deadline exceeded
--- FAIL: TestISOImage/Binaries/curl (0.00s)

x
+
TestISOImage/Binaries/docker (0s)

=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker

=== CONT  TestISOImage/Binaries/docker
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-783207 ssh "which docker"
iso_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-783207 ssh "which docker": context deadline exceeded (251ns)
iso_test.go:78: failed to verify existence of "docker" binary : args "out/minikube-linux-amd64 -p guest-783207 ssh \"which docker\"": context deadline exceeded
--- FAIL: TestISOImage/Binaries/docker (0.00s)

x
+
TestISOImage/Binaries/git (0s)

=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git

=== CONT  TestISOImage/Binaries/git
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-783207 ssh "which git"
iso_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-783207 ssh "which git": context deadline exceeded (346ns)
iso_test.go:78: failed to verify existence of "git" binary : args "out/minikube-linux-amd64 -p guest-783207 ssh \"which git\"": context deadline exceeded
--- FAIL: TestISOImage/Binaries/git (0.00s)

x
+
TestISOImage/Binaries/iptables (0s)

=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables

=== CONT  TestISOImage/Binaries/iptables
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-783207 ssh "which iptables"
iso_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-783207 ssh "which iptables": context deadline exceeded (372ns)
iso_test.go:78: failed to verify existence of "iptables" binary : args "out/minikube-linux-amd64 -p guest-783207 ssh \"which iptables\"": context deadline exceeded
--- FAIL: TestISOImage/Binaries/iptables (0.00s)

x
+
TestISOImage/Binaries/podman (0s)

=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman

=== CONT  TestISOImage/Binaries/podman
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-783207 ssh "which podman"
iso_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-783207 ssh "which podman": context deadline exceeded (504ns)
iso_test.go:78: failed to verify existence of "podman" binary : args "out/minikube-linux-amd64 -p guest-783207 ssh \"which podman\"": context deadline exceeded
--- FAIL: TestISOImage/Binaries/podman (0.00s)

x
+
TestISOImage/Binaries/rsync (0s)

=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync

=== CONT  TestISOImage/Binaries/rsync
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-783207 ssh "which rsync"
iso_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-783207 ssh "which rsync": context deadline exceeded (369ns)
iso_test.go:78: failed to verify existence of "rsync" binary : args "out/minikube-linux-amd64 -p guest-783207 ssh \"which rsync\"": context deadline exceeded
--- FAIL: TestISOImage/Binaries/rsync (0.00s)

x
+
TestISOImage/Binaries/socat (0s)

=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat

=== CONT  TestISOImage/Binaries/socat
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-783207 ssh "which socat"
iso_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-783207 ssh "which socat": context deadline exceeded (385ns)
iso_test.go:78: failed to verify existence of "socat" binary : args "out/minikube-linux-amd64 -p guest-783207 ssh \"which socat\"": context deadline exceeded
--- FAIL: TestISOImage/Binaries/socat (0.00s)

x
+
TestISOImage/Binaries/wget (0s)

=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget

=== CONT  TestISOImage/Binaries/wget
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-783207 ssh "which wget"
iso_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-783207 ssh "which wget": context deadline exceeded (417ns)
iso_test.go:78: failed to verify existence of "wget" binary : args "out/minikube-linux-amd64 -p guest-783207 ssh \"which wget\"": context deadline exceeded
--- FAIL: TestISOImage/Binaries/wget (0.00s)

x
+
TestISOImage/Binaries/VBoxControl (0s)

=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl

=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-783207 ssh "which VBoxControl"
iso_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-783207 ssh "which VBoxControl": context deadline exceeded (609ns)
iso_test.go:78: failed to verify existence of "VBoxControl" binary : args "out/minikube-linux-amd64 -p guest-783207 ssh \"which VBoxControl\"": context deadline exceeded
--- FAIL: TestISOImage/Binaries/VBoxControl (0.00s)

x
+
TestISOImage/Binaries/VBoxService (0s)

=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService

=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-783207 ssh "which VBoxService"
iso_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-783207 ssh "which VBoxService": context deadline exceeded (313ns)
iso_test.go:78: failed to verify existence of "VBoxService" binary : args "out/minikube-linux-amd64 -p guest-783207 ssh \"which VBoxService\"": context deadline exceeded
--- FAIL: TestISOImage/Binaries/VBoxService (0.00s)

x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (542.6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1219 03:54:01.562073    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-936345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:54:14.836409    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/calico-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:54:18.512460    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-936345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-094166 -n old-k8s-version-094166
start_stop_delete_test.go:272: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-12-19 04:02:55.992758858 +0000 UTC m=+5879.030938171
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
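
This subtest waits up to 9m0s for a pod carrying the k8s-app=kubernetes-dashboard label in the kubernetes-dashboard namespace to become Ready after the stop/start cycle, and gives up here with the context-deadline error shown above. The sketch below is a rough client-go equivalent of that wait, not the integration suite's actual helper; the kubeconfig handling (default ~/.kube/config, current context) and the 5s poll interval are assumptions for illustration only.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether a pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	// Build a client from the default kubeconfig; the real test targets
	// the old-k8s-version-094166 profile's context instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll for up to 9 minutes, matching the test's timeout.
	err = wait.PollUntilContextTimeout(context.Background(), 5*time.Second, 9*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
				LabelSelector: "k8s-app=kubernetes-dashboard",
			})
			if err != nil {
				return false, nil // retry on transient API errors
			}
			for i := range pods.Items {
				if podReady(&pods.Items[i]) {
					return true, nil
				}
			}
			return false, nil
		})
	fmt.Println("wait result:", err)
}

On expiry, PollUntilContextTimeout returns a deadline-exceeded error analogous to the one logged by start_stop_delete_test.go:272 above.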
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-094166 -n old-k8s-version-094166
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-094166 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-094166 logs -n 25: (1.476339014s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────
─────┐
	│ COMMAND │                                                                                                                    ARGS                                                                                                                     │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────
─────┤
	│ ssh     │ -p bridge-542624 sudo cat /etc/containerd/config.toml                                                                                                                                                                                       │ bridge-542624                │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:51 UTC │
	│ ssh     │ -p bridge-542624 sudo containerd config dump                                                                                                                                                                                                │ bridge-542624                │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:51 UTC │
	│ ssh     │ -p bridge-542624 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                         │ bridge-542624                │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:51 UTC │
	│ ssh     │ -p bridge-542624 sudo systemctl cat crio --no-pager                                                                                                                                                                                         │ bridge-542624                │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:51 UTC │
	│ ssh     │ -p bridge-542624 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                               │ bridge-542624                │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:51 UTC │
	│ ssh     │ -p bridge-542624 sudo crio config                                                                                                                                                                                                           │ bridge-542624                │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:51 UTC │
	│ delete  │ -p bridge-542624                                                                                                                                                                                                                            │ bridge-542624                │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:51 UTC │
	│ delete  │ -p disable-driver-mounts-189846                                                                                                                                                                                                             │ disable-driver-mounts-189846 │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:51 UTC │
	│ start   │ -p default-k8s-diff-port-168174 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-168174 │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:52 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-094166 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                │ old-k8s-version-094166       │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:51 UTC │
	│ stop    │ -p old-k8s-version-094166 --alsologtostderr -v=3                                                                                                                                                                                            │ old-k8s-version-094166       │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:53 UTC │
	│ addons  │ enable metrics-server -p no-preload-298059 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                     │ no-preload-298059            │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:51 UTC │
	│ stop    │ -p no-preload-298059 --alsologtostderr -v=3                                                                                                                                                                                                 │ no-preload-298059            │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:53 UTC │
	│ addons  │ enable metrics-server -p embed-certs-244717 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                    │ embed-certs-244717           │ jenkins │ v1.37.0 │ 19 Dec 25 03:52 UTC │ 19 Dec 25 03:52 UTC │
	│ stop    │ -p embed-certs-244717 --alsologtostderr -v=3                                                                                                                                                                                                │ embed-certs-244717           │ jenkins │ v1.37.0 │ 19 Dec 25 03:52 UTC │ 19 Dec 25 03:53 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-168174 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                          │ default-k8s-diff-port-168174 │ jenkins │ v1.37.0 │ 19 Dec 25 03:52 UTC │ 19 Dec 25 03:52 UTC │
	│ stop    │ -p default-k8s-diff-port-168174 --alsologtostderr -v=3                                                                                                                                                                                      │ default-k8s-diff-port-168174 │ jenkins │ v1.37.0 │ 19 Dec 25 03:52 UTC │ 19 Dec 25 03:54 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-094166 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                           │ old-k8s-version-094166       │ jenkins │ v1.37.0 │ 19 Dec 25 03:53 UTC │ 19 Dec 25 03:53 UTC │
	│ start   │ -p old-k8s-version-094166 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-094166       │ jenkins │ v1.37.0 │ 19 Dec 25 03:53 UTC │ 19 Dec 25 03:53 UTC │
	│ addons  │ enable dashboard -p no-preload-298059 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                │ no-preload-298059            │ jenkins │ v1.37.0 │ 19 Dec 25 03:53 UTC │ 19 Dec 25 03:53 UTC │
	│ start   │ -p no-preload-298059 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-298059            │ jenkins │ v1.37.0 │ 19 Dec 25 03:53 UTC │ 19 Dec 25 04:00 UTC │
	│ addons  │ enable dashboard -p embed-certs-244717 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                               │ embed-certs-244717           │ jenkins │ v1.37.0 │ 19 Dec 25 03:53 UTC │ 19 Dec 25 03:53 UTC │
	│ start   │ -p embed-certs-244717 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-244717           │ jenkins │ v1.37.0 │ 19 Dec 25 03:53 UTC │ 19 Dec 25 04:00 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-168174 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                     │ default-k8s-diff-port-168174 │ jenkins │ v1.37.0 │ 19 Dec 25 03:54 UTC │ 19 Dec 25 03:54 UTC │
	│ start   │ -p default-k8s-diff-port-168174 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-168174 │ jenkins │ v1.37.0 │ 19 Dec 25 03:54 UTC │ 19 Dec 25 04:01 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────
─────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 03:54:19
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 03:54:19.163618   56230 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:54:19.163755   56230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:54:19.163766   56230 out.go:374] Setting ErrFile to fd 2...
	I1219 03:54:19.163773   56230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:54:19.164086   56230 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5010/.minikube/bin
	I1219 03:54:19.164710   56230 out.go:368] Setting JSON to false
	I1219 03:54:19.166058   56230 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5803,"bootTime":1766110656,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 03:54:19.166138   56230 start.go:143] virtualization: kvm guest
	I1219 03:54:19.167819   56230 out.go:179] * [default-k8s-diff-port-168174] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 03:54:19.168806   56230 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 03:54:19.168798   56230 notify.go:221] Checking for updates...
	I1219 03:54:19.170649   56230 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 03:54:19.171718   56230 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-5010/kubeconfig
	I1219 03:54:19.172800   56230 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5010/.minikube
	I1219 03:54:19.173680   56230 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 03:54:19.174607   56230 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 03:54:19.176155   56230 config.go:182] Loaded profile config "default-k8s-diff-port-168174": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:54:19.176843   56230 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 03:54:19.221795   56230 out.go:179] * Using the kvm2 driver based on existing profile
	I1219 03:54:19.222673   56230 start.go:309] selected driver: kvm2
	I1219 03:54:19.222686   56230 start.go:928] validating driver "kvm2" against &{Name:default-k8s-diff-port-168174 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-168174 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.68 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:f
alse ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:54:19.222787   56230 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 03:54:19.223700   56230 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:54:19.223731   56230 cni.go:84] Creating CNI manager for ""
	I1219 03:54:19.223785   56230 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1219 03:54:19.223821   56230 start.go:353] cluster config:
	{Name:default-k8s-diff-port-168174 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-168174 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.68 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000
.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:54:19.223901   56230 iso.go:125] acquiring lock: {Name:mk42290af04a74a4dddf27fa33aac85c9bdccfe0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:54:19.225058   56230 out.go:179] * Starting "default-k8s-diff-port-168174" primary control-plane node in "default-k8s-diff-port-168174" cluster
	I1219 03:54:19.225891   56230 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1219 03:54:19.225925   56230 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-5010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1219 03:54:19.225937   56230 cache.go:65] Caching tarball of preloaded images
	I1219 03:54:19.226014   56230 preload.go:238] Found /home/jenkins/minikube-integration/22230-5010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1219 03:54:19.226025   56230 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1219 03:54:19.226103   56230 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/default-k8s-diff-port-168174/config.json ...
	I1219 03:54:19.226379   56230 start.go:360] acquireMachinesLock for default-k8s-diff-port-168174: {Name:mk229398d29442d4d52885bbac963e5004bfbfba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1219 03:54:19.226434   56230 start.go:364] duration metric: took 34.138µs to acquireMachinesLock for "default-k8s-diff-port-168174"
	I1219 03:54:19.226446   56230 start.go:96] Skipping create...Using existing machine configuration
	I1219 03:54:19.226451   56230 fix.go:54] fixHost starting: 
	I1219 03:54:19.228163   56230 fix.go:112] recreateIfNeeded on default-k8s-diff-port-168174: state=Stopped err=<nil>
	W1219 03:54:19.228180   56230 fix.go:138] unexpected machine state, will restart: <nil>
	I1219 03:54:16.533332   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:17.359209   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:17.532886   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:18.033640   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:18.533499   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:19.033373   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:19.533624   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:20.033318   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:20.532932   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:21.032204   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:18.384127   55957 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:54:18.420807   55957 api_server.go:72] duration metric: took 1.537508247s to wait for apiserver process to appear ...
	I1219 03:54:18.420840   55957 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:54:18.420862   55957 api_server.go:253] Checking apiserver healthz at https://192.168.83.54:8443/healthz ...
	I1219 03:54:21.071318   55957 api_server.go:279] https://192.168.83.54:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1219 03:54:21.071349   55957 api_server.go:103] status: https://192.168.83.54:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1219 03:54:21.071368   55957 api_server.go:253] Checking apiserver healthz at https://192.168.83.54:8443/healthz ...
	I1219 03:54:21.151121   55957 api_server.go:279] https://192.168.83.54:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1219 03:54:21.151151   55957 api_server.go:103] status: https://192.168.83.54:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1219 03:54:21.421632   55957 api_server.go:253] Checking apiserver healthz at https://192.168.83.54:8443/healthz ...
	I1219 03:54:21.426745   55957 api_server.go:279] https://192.168.83.54:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:54:21.426773   55957 api_server.go:103] status: https://192.168.83.54:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:54:21.921398   55957 api_server.go:253] Checking apiserver healthz at https://192.168.83.54:8443/healthz ...
	I1219 03:54:21.927340   55957 api_server.go:279] https://192.168.83.54:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:54:21.927368   55957 api_server.go:103] status: https://192.168.83.54:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:54:22.420988   55957 api_server.go:253] Checking apiserver healthz at https://192.168.83.54:8443/healthz ...
	I1219 03:54:22.428236   55957 api_server.go:279] https://192.168.83.54:8443/healthz returned 200:
	ok
	I1219 03:54:22.439161   55957 api_server.go:141] control plane version: v1.34.3
	I1219 03:54:22.439190   55957 api_server.go:131] duration metric: took 4.018341977s to wait for apiserver health ...
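The block above is minikube polling the apiserver's /healthz endpoint until the bootstrap post-start hooks (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes) report ok; the 500s turn into a 200 roughly four seconds in. A minimal manual equivalent, assuming the same IP and port from the log and that anonymous access to /healthz is permitted on this cluster:

    # Illustrative only; not part of the test output.
    curl -k "https://192.168.83.54:8443/healthz?verbose"
    # While the post-start hooks are still running this returns HTTP 500 with the per-check
    # breakdown shown above; once they finish it returns plain "ok".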
	I1219 03:54:22.439202   55957 cni.go:84] Creating CNI manager for ""
	I1219 03:54:22.439211   55957 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1219 03:54:22.440712   55957 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1219 03:54:22.442679   55957 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1219 03:54:22.464908   55957 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
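With no CNI explicitly requested, the kvm2 driver plus crio runtime combination falls back to the bridge CNI, and a 496-byte conflist is copied to /etc/cni/net.d/1-k8s.conflist. The file's contents are not shown in the log; the following is only a generic bridge-plugin sketch of what such a conflist typically looks like, with assumed field values, not the actual file minikube wrote:

    cat <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF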
	I1219 03:54:22.524765   55957 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:54:22.531030   55957 system_pods.go:59] 8 kube-system pods found
	I1219 03:54:22.531082   55957 system_pods.go:61] "coredns-66bc5c9577-9ptrv" [22226444-faa6-420d-a862-1ef0441a80e4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:54:22.531096   55957 system_pods.go:61] "etcd-embed-certs-244717" [24fc3b7f-cbce-4591-9d18-9859136e38c6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:54:22.531109   55957 system_pods.go:61] "kube-apiserver-embed-certs-244717" [67d6bc2f-88f4-449a-a029-dc66d6ad1468] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:54:22.531117   55957 system_pods.go:61] "kube-controller-manager-embed-certs-244717" [f2be3fb7-4440-43a3-96ec-b2b241393a84] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:54:22.531126   55957 system_pods.go:61] "kube-proxy-p8gvm" [283607b2-9e6c-44f4-9c9d-7d713c71fb8c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:54:22.531135   55957 system_pods.go:61] "kube-scheduler-embed-certs-244717" [243ff9cd-448c-4e21-98aa-032de9f329fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:54:22.531151   55957 system_pods.go:61] "metrics-server-746fcd58dc-x74d4" [e4a33dcc-d3b6-45a0-92d7-6cfbc5df35b2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:54:22.531159   55957 system_pods.go:61] "storage-provisioner" [99ff9c60-2f30-457a-8cb5-e030eb64a58e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:54:22.531169   55957 system_pods.go:74] duration metric: took 6.378453ms to wait for pod list to return data ...
	I1219 03:54:22.531184   55957 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:54:22.538334   55957 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1219 03:54:22.538361   55957 node_conditions.go:123] node cpu capacity is 2
	I1219 03:54:22.538378   55957 node_conditions.go:105] duration metric: took 7.188571ms to run NodePressure ...
	I1219 03:54:22.538434   55957 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:54:22.838171   55957 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1219 03:54:22.841979   55957 kubeadm.go:744] kubelet initialised
	I1219 03:54:22.842009   55957 kubeadm.go:745] duration metric: took 3.812738ms waiting for restarted kubelet to initialise ...
	I1219 03:54:22.842027   55957 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1219 03:54:22.858280   55957 ops.go:34] apiserver oom_adj: -16
	I1219 03:54:22.858296   55957 kubeadm.go:602] duration metric: took 8.274282939s to restartPrimaryControlPlane
	I1219 03:54:22.858304   55957 kubeadm.go:403] duration metric: took 8.332738451s to StartCluster
	I1219 03:54:22.858319   55957 settings.go:142] acquiring lock: {Name:mkb131693a40b9aa50c302192272518c3561c861 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:54:22.858398   55957 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-5010/kubeconfig
	I1219 03:54:22.860091   55957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5010/kubeconfig: {Name:mk464ac013e036429e360ed78fc04687ed75f83a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:54:22.860306   55957 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.83.54 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1219 03:54:22.860397   55957 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
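The toEnable map above is the resolved addon state for this profile: dashboard, metrics-server, storage-provisioner and default-storageclass on, everything else off. An illustrative way to confirm the same state from the host after startup, using the profile name from the log:

    minikube -p embed-certs-244717 addons list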
	I1219 03:54:22.860520   55957 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-244717"
	I1219 03:54:22.860540   55957 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-244717"
	W1219 03:54:22.860553   55957 addons.go:248] addon storage-provisioner should already be in state true
	I1219 03:54:22.860556   55957 addons.go:70] Setting default-storageclass=true in profile "embed-certs-244717"
	I1219 03:54:22.860588   55957 config.go:182] Loaded profile config "embed-certs-244717": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:54:22.860638   55957 addons.go:70] Setting dashboard=true in profile "embed-certs-244717"
	I1219 03:54:22.860664   55957 addons.go:239] Setting addon dashboard=true in "embed-certs-244717"
	W1219 03:54:22.860674   55957 addons.go:248] addon dashboard should already be in state true
	I1219 03:54:22.860596   55957 host.go:66] Checking if "embed-certs-244717" exists ...
	I1219 03:54:22.860698   55957 host.go:66] Checking if "embed-certs-244717" exists ...
	I1219 03:54:22.860603   55957 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-244717"
	I1219 03:54:22.860613   55957 addons.go:70] Setting metrics-server=true in profile "embed-certs-244717"
	I1219 03:54:22.861202   55957 addons.go:239] Setting addon metrics-server=true in "embed-certs-244717"
	W1219 03:54:22.861219   55957 addons.go:248] addon metrics-server should already be in state true
	I1219 03:54:22.861243   55957 host.go:66] Checking if "embed-certs-244717" exists ...
	I1219 03:54:22.861875   55957 out.go:179] * Verifying Kubernetes components...
	I1219 03:54:22.862820   55957 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:54:22.863427   55957 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:54:22.863444   55957 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
	I1219 03:54:22.864891   55957 addons.go:239] Setting addon default-storageclass=true in "embed-certs-244717"
	W1219 03:54:22.864914   55957 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:54:22.864935   55957 host.go:66] Checking if "embed-certs-244717" exists ...
	I1219 03:54:22.866702   55957 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:54:22.866730   55957 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:54:22.866703   55957 main.go:144] libmachine: domain embed-certs-244717 has defined MAC address 52:54:00:c1:1a:c3 in network mk-embed-certs-244717
	I1219 03:54:22.866913   55957 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 03:54:22.867359   55957 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c1:1a:c3", ip: ""} in network mk-embed-certs-244717: {Iface:virbr5 ExpiryTime:2025-12-19 04:54:05 +0000 UTC Type:0 Mac:52:54:00:c1:1a:c3 Iaid: IPaddr:192.168.83.54 Prefix:24 Hostname:embed-certs-244717 Clientid:01:52:54:00:c1:1a:c3}
	I1219 03:54:22.867391   55957 main.go:144] libmachine: domain embed-certs-244717 has defined IP address 192.168.83.54 and MAC address 52:54:00:c1:1a:c3 in network mk-embed-certs-244717
	I1219 03:54:22.867616   55957 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1219 03:54:22.867638   55957 sshutil.go:53] new ssh client: &{IP:192.168.83.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/embed-certs-244717/id_rsa Username:docker}
	I1219 03:54:22.868328   55957 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:54:22.868344   55957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:54:22.868968   55957 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1219 03:54:22.869019   55957 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1219 03:54:22.870937   55957 main.go:144] libmachine: domain embed-certs-244717 has defined MAC address 52:54:00:c1:1a:c3 in network mk-embed-certs-244717
	I1219 03:54:22.871717   55957 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c1:1a:c3", ip: ""} in network mk-embed-certs-244717: {Iface:virbr5 ExpiryTime:2025-12-19 04:54:05 +0000 UTC Type:0 Mac:52:54:00:c1:1a:c3 Iaid: IPaddr:192.168.83.54 Prefix:24 Hostname:embed-certs-244717 Clientid:01:52:54:00:c1:1a:c3}
	I1219 03:54:22.871748   55957 main.go:144] libmachine: domain embed-certs-244717 has defined IP address 192.168.83.54 and MAC address 52:54:00:c1:1a:c3 in network mk-embed-certs-244717
	I1219 03:54:22.871986   55957 sshutil.go:53] new ssh client: &{IP:192.168.83.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/embed-certs-244717/id_rsa Username:docker}
	I1219 03:54:22.872790   55957 main.go:144] libmachine: domain embed-certs-244717 has defined MAC address 52:54:00:c1:1a:c3 in network mk-embed-certs-244717
	I1219 03:54:22.873111   55957 main.go:144] libmachine: domain embed-certs-244717 has defined MAC address 52:54:00:c1:1a:c3 in network mk-embed-certs-244717
	I1219 03:54:22.873212   55957 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c1:1a:c3", ip: ""} in network mk-embed-certs-244717: {Iface:virbr5 ExpiryTime:2025-12-19 04:54:05 +0000 UTC Type:0 Mac:52:54:00:c1:1a:c3 Iaid: IPaddr:192.168.83.54 Prefix:24 Hostname:embed-certs-244717 Clientid:01:52:54:00:c1:1a:c3}
	I1219 03:54:22.873235   55957 main.go:144] libmachine: domain embed-certs-244717 has defined IP address 192.168.83.54 and MAC address 52:54:00:c1:1a:c3 in network mk-embed-certs-244717
	I1219 03:54:22.873423   55957 sshutil.go:53] new ssh client: &{IP:192.168.83.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/embed-certs-244717/id_rsa Username:docker}
	I1219 03:54:22.873635   55957 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c1:1a:c3", ip: ""} in network mk-embed-certs-244717: {Iface:virbr5 ExpiryTime:2025-12-19 04:54:05 +0000 UTC Type:0 Mac:52:54:00:c1:1a:c3 Iaid: IPaddr:192.168.83.54 Prefix:24 Hostname:embed-certs-244717 Clientid:01:52:54:00:c1:1a:c3}
	I1219 03:54:22.873666   55957 main.go:144] libmachine: domain embed-certs-244717 has defined IP address 192.168.83.54 and MAC address 52:54:00:c1:1a:c3 in network mk-embed-certs-244717
	I1219 03:54:22.873832   55957 sshutil.go:53] new ssh client: &{IP:192.168.83.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/embed-certs-244717/id_rsa Username:docker}
	I1219 03:54:23.104462   55957 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:54:23.139781   55957 node_ready.go:35] waiting up to 6m0s for node "embed-certs-244717" to be "Ready" ...
	I1219 03:54:19.229464   56230 out.go:252] * Restarting existing kvm2 VM for "default-k8s-diff-port-168174" ...
	I1219 03:54:19.229501   56230 main.go:144] libmachine: starting domain...
	I1219 03:54:19.229509   56230 main.go:144] libmachine: ensuring networks are active...
	I1219 03:54:19.230233   56230 main.go:144] libmachine: Ensuring network default is active
	I1219 03:54:19.230721   56230 main.go:144] libmachine: Ensuring network mk-default-k8s-diff-port-168174 is active
	I1219 03:54:19.231248   56230 main.go:144] libmachine: getting domain XML...
	I1219 03:54:19.232369   56230 main.go:144] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>default-k8s-diff-port-168174</name>
	  <uuid>5503b0a8-1398-475d-b625-563c5bc2d168</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22230-5010/.minikube/machines/default-k8s-diff-port-168174/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22230-5010/.minikube/machines/default-k8s-diff-port-168174/default-k8s-diff-port-168174.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:d9:97:a2'/>
	      <source network='mk-default-k8s-diff-port-168174'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:3f:9e:c8'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
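That is the complete libvirt domain definition minikube regenerates before restarting the VM: 3 GiB of RAM, 2 vCPUs, the boot2docker ISO on a SCSI CD-ROM, the raw disk on virtio, and two virtio NICs (the profile network mk-default-k8s-diff-port-168174 plus the libvirt default network). The same definition can be inspected or started by hand with virsh; this is only an illustrative equivalent of the libvirt API calls minikube makes, not something the test runs:

    virsh --connect qemu:///system dumpxml default-k8s-diff-port-168174
    virsh --connect qemu:///system start default-k8s-diff-port-168174
    virsh --connect qemu:///system domifaddr default-k8s-diff-port-168174 --source lease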
	
	I1219 03:54:20.662520   56230 main.go:144] libmachine: waiting for domain to start...
	I1219 03:54:20.663943   56230 main.go:144] libmachine: domain is now running
	I1219 03:54:20.663969   56230 main.go:144] libmachine: waiting for IP...
	I1219 03:54:20.664770   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:20.665467   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has current primary IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:20.665481   56230 main.go:144] libmachine: found domain IP: 192.168.50.68
	I1219 03:54:20.665486   56230 main.go:144] libmachine: reserving static IP address...
	I1219 03:54:20.665943   56230 main.go:144] libmachine: found host DHCP lease matching {name: "default-k8s-diff-port-168174", mac: "52:54:00:d9:97:a2", ip: "192.168.50.68"} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:51:35 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:20.665989   56230 main.go:144] libmachine: skip adding static IP to network mk-default-k8s-diff-port-168174 - found existing host DHCP lease matching {name: "default-k8s-diff-port-168174", mac: "52:54:00:d9:97:a2", ip: "192.168.50.68"}
	I1219 03:54:20.666003   56230 main.go:144] libmachine: reserved static IP address 192.168.50.68 for domain default-k8s-diff-port-168174
	I1219 03:54:20.666019   56230 main.go:144] libmachine: waiting for SSH...
	I1219 03:54:20.666027   56230 main.go:144] libmachine: Getting to WaitForSSH function...
	I1219 03:54:20.668799   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:20.669225   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:51:35 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:20.669267   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:20.669495   56230 main.go:144] libmachine: Using SSH client type: native
	I1219 03:54:20.669789   56230 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.50.68 22 <nil> <nil>}
	I1219 03:54:20.669805   56230 main.go:144] libmachine: About to run SSH command:
	exit 0
	I1219 03:54:23.725788   56230 main.go:144] libmachine: Error dialing TCP: dial tcp 192.168.50.68:22: connect: no route to host
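The "no route to host" dials are expected while the guest is still booting; the WaitForSSH loop keeps retrying until sshd answers, which happens at 03:54:35 further down. A hand-rolled equivalent of that wait, assuming the key path, user and IP shown in the log, would be:

    until ssh -o StrictHostKeyChecking=no -o ConnectTimeout=3 \
        -i /home/jenkins/minikube-integration/22230-5010/.minikube/machines/default-k8s-diff-port-168174/id_rsa \
        docker@192.168.50.68 'exit 0'; do
      sleep 3
    done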
	I1219 03:54:21.532614   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:22.032788   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:22.532959   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:23.032773   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:23.531977   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:24.033500   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:24.532177   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:25.033441   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:25.533482   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:26.031758   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:23.198551   55957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:54:23.404667   55957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:54:23.420466   55957 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:54:23.445604   55957 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1219 03:54:23.445631   55957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1219 03:54:23.525300   55957 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1219 03:54:23.525326   55957 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1219 03:54:23.593759   55957 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:54:23.593784   55957 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1219 03:54:23.645141   55957 node_ready.go:49] node "embed-certs-244717" is "Ready"
	I1219 03:54:23.645171   55957 node_ready.go:38] duration metric: took 505.352434ms for node "embed-certs-244717" to be "Ready" ...
	I1219 03:54:23.645183   55957 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:54:23.645241   55957 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:54:23.652800   55957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:54:24.781529   55957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.376827148s)
	I1219 03:54:24.781591   55957 ssh_runner.go:235] Completed: test -f /usr/bin/helm: (1.361072264s)
	I1219 03:54:24.781616   55957 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.136359787s)
	I1219 03:54:24.781638   55957 api_server.go:72] duration metric: took 1.9213054s to wait for apiserver process to appear ...
	I1219 03:54:24.781645   55957 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:54:24.781662   55957 api_server.go:253] Checking apiserver healthz at https://192.168.83.54:8443/healthz ...
	I1219 03:54:24.781671   55957 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:54:24.791019   55957 api_server.go:279] https://192.168.83.54:8443/healthz returned 200:
	ok
	I1219 03:54:24.791945   55957 api_server.go:141] control plane version: v1.34.3
	I1219 03:54:24.791970   55957 api_server.go:131] duration metric: took 10.31791ms to wait for apiserver health ...
	I1219 03:54:24.791980   55957 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:54:24.795539   55957 system_pods.go:59] 8 kube-system pods found
	I1219 03:54:24.795599   55957 system_pods.go:61] "coredns-66bc5c9577-9ptrv" [22226444-faa6-420d-a862-1ef0441a80e4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:54:24.795612   55957 system_pods.go:61] "etcd-embed-certs-244717" [24fc3b7f-cbce-4591-9d18-9859136e38c6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:54:24.795627   55957 system_pods.go:61] "kube-apiserver-embed-certs-244717" [67d6bc2f-88f4-449a-a029-dc66d6ad1468] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:54:24.795638   55957 system_pods.go:61] "kube-controller-manager-embed-certs-244717" [f2be3fb7-4440-43a3-96ec-b2b241393a84] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:54:24.795644   55957 system_pods.go:61] "kube-proxy-p8gvm" [283607b2-9e6c-44f4-9c9d-7d713c71fb8c] Running
	I1219 03:54:24.795655   55957 system_pods.go:61] "kube-scheduler-embed-certs-244717" [243ff9cd-448c-4e21-98aa-032de9f329fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:54:24.795666   55957 system_pods.go:61] "metrics-server-746fcd58dc-x74d4" [e4a33dcc-d3b6-45a0-92d7-6cfbc5df35b2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:54:24.795671   55957 system_pods.go:61] "storage-provisioner" [99ff9c60-2f30-457a-8cb5-e030eb64a58e] Running
	I1219 03:54:24.795683   55957 system_pods.go:74] duration metric: took 3.696303ms to wait for pod list to return data ...
	I1219 03:54:24.795694   55957 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:54:24.797860   55957 default_sa.go:45] found service account: "default"
	I1219 03:54:24.797884   55957 default_sa.go:55] duration metric: took 2.181869ms for default service account to be created ...
	I1219 03:54:24.797895   55957 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:54:24.800212   55957 system_pods.go:86] 8 kube-system pods found
	I1219 03:54:24.800242   55957 system_pods.go:89] "coredns-66bc5c9577-9ptrv" [22226444-faa6-420d-a862-1ef0441a80e4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:54:24.800255   55957 system_pods.go:89] "etcd-embed-certs-244717" [24fc3b7f-cbce-4591-9d18-9859136e38c6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:54:24.800267   55957 system_pods.go:89] "kube-apiserver-embed-certs-244717" [67d6bc2f-88f4-449a-a029-dc66d6ad1468] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:54:24.800277   55957 system_pods.go:89] "kube-controller-manager-embed-certs-244717" [f2be3fb7-4440-43a3-96ec-b2b241393a84] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:54:24.800283   55957 system_pods.go:89] "kube-proxy-p8gvm" [283607b2-9e6c-44f4-9c9d-7d713c71fb8c] Running
	I1219 03:54:24.800291   55957 system_pods.go:89] "kube-scheduler-embed-certs-244717" [243ff9cd-448c-4e21-98aa-032de9f329fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:54:24.800300   55957 system_pods.go:89] "metrics-server-746fcd58dc-x74d4" [e4a33dcc-d3b6-45a0-92d7-6cfbc5df35b2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:54:24.800307   55957 system_pods.go:89] "storage-provisioner" [99ff9c60-2f30-457a-8cb5-e030eb64a58e] Running
	I1219 03:54:24.800317   55957 system_pods.go:126] duration metric: took 2.415918ms to wait for k8s-apps to be running ...
	I1219 03:54:24.800326   55957 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:54:24.800389   55957 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:54:24.901954   55957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.249113047s)
	I1219 03:54:24.901997   55957 addons.go:500] Verifying addon metrics-server=true in "embed-certs-244717"
	I1219 03:54:24.902043   55957 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	I1219 03:54:24.902053   55957 system_svc.go:56] duration metric: took 101.72157ms WaitForService to wait for kubelet
	I1219 03:54:24.902083   55957 kubeadm.go:587] duration metric: took 2.041739112s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:54:24.902106   55957 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:54:24.912597   55957 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1219 03:54:24.912623   55957 node_conditions.go:123] node cpu capacity is 2
	I1219 03:54:24.912638   55957 node_conditions.go:105] duration metric: took 10.525951ms to run NodePressure ...
	I1219 03:54:24.912652   55957 start.go:242] waiting for startup goroutines ...
	I1219 03:54:25.801998   55957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	I1219 03:54:29.507152   55957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (3.70510669s)
	I1219 03:54:29.507259   55957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:54:29.992247   55957 addons.go:500] Verifying addon dashboard=true in "embed-certs-244717"
	I1219 03:54:29.995517   55957 out.go:179] * Verifying dashboard addon...
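At this point the dashboard Helm release is installed and minikube starts the kapi.go wait loop for the kubernetes-dashboard-web pod seen below. An illustrative way to watch the same rollout from the host, assuming the kubectl context carries the profile name as minikube normally sets it up:

    kubectl --context embed-certs-244717 -n kubernetes-dashboard \
      get pods -l app.kubernetes.io/name=kubernetes-dashboard-web -w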
	I1219 03:54:26.531479   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:27.031454   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:27.532215   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:28.032964   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:28.532268   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:29.032253   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:29.533154   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:30.032379   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:30.532853   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:31.032643   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:29.998065   55957 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:54:30.003541   55957 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:54:30.003561   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:30.510371   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:31.003319   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:31.502854   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:32.002809   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:32.503083   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:33.001709   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:29.805953   56230 main.go:144] libmachine: Error dialing TCP: dial tcp 192.168.50.68:22: connect: no route to host
	I1219 03:54:32.806901   56230 main.go:144] libmachine: Error dialing TCP: dial tcp 192.168.50.68:22: connect: connection refused
	I1219 03:54:31.531396   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:32.033946   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:32.532063   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:33.033088   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:33.532601   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:34.032154   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:34.532734   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:35.031403   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:35.532231   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:36.031798   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:33.501421   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:34.001823   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:34.501944   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:35.001242   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:35.502033   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:36.001834   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:36.503279   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:37.002832   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:37.501859   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:38.002218   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:35.914133   56230 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 03:54:35.917629   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:35.918062   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:35.918084   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:35.918331   56230 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/default-k8s-diff-port-168174/config.json ...
	I1219 03:54:35.918603   56230 machine.go:94] provisionDockerMachine start ...
	I1219 03:54:35.921009   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:35.921341   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:35.921380   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:35.921581   56230 main.go:144] libmachine: Using SSH client type: native
	I1219 03:54:35.921797   56230 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.50.68 22 <nil> <nil>}
	I1219 03:54:35.921810   56230 main.go:144] libmachine: About to run SSH command:
	hostname
	I1219 03:54:36.027619   56230 main.go:144] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1219 03:54:36.027644   56230 buildroot.go:166] provisioning hostname "default-k8s-diff-port-168174"
	I1219 03:54:36.030973   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.031540   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:36.031597   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.031855   56230 main.go:144] libmachine: Using SSH client type: native
	I1219 03:54:36.032105   56230 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.50.68 22 <nil> <nil>}
	I1219 03:54:36.032121   56230 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-168174 && echo "default-k8s-diff-port-168174" | sudo tee /etc/hostname
	I1219 03:54:36.154920   56230 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-168174
	
	I1219 03:54:36.157818   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.158270   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:36.158298   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.158481   56230 main.go:144] libmachine: Using SSH client type: native
	I1219 03:54:36.158705   56230 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.50.68 22 <nil> <nil>}
	I1219 03:54:36.158721   56230 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-168174' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-168174/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-168174' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1219 03:54:36.278763   56230 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 03:54:36.278793   56230 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22230-5010/.minikube CaCertPath:/home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22230-5010/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22230-5010/.minikube}
	I1219 03:54:36.278815   56230 buildroot.go:174] setting up certificates
	I1219 03:54:36.278825   56230 provision.go:84] configureAuth start
	I1219 03:54:36.282034   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.282595   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:36.282631   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.285039   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.285396   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:36.285421   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.285558   56230 provision.go:143] copyHostCerts
	I1219 03:54:36.285634   56230 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-5010/.minikube/ca.pem, removing ...
	I1219 03:54:36.285655   56230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-5010/.minikube/ca.pem
	I1219 03:54:36.285732   56230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22230-5010/.minikube/ca.pem (1082 bytes)
	I1219 03:54:36.285873   56230 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-5010/.minikube/cert.pem, removing ...
	I1219 03:54:36.285889   56230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-5010/.minikube/cert.pem
	I1219 03:54:36.285939   56230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22230-5010/.minikube/cert.pem (1123 bytes)
	I1219 03:54:36.286034   56230 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-5010/.minikube/key.pem, removing ...
	I1219 03:54:36.286044   56230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-5010/.minikube/key.pem
	I1219 03:54:36.286086   56230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22230-5010/.minikube/key.pem (1679 bytes)
	I1219 03:54:36.286187   56230 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22230-5010/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-168174 san=[127.0.0.1 192.168.50.68 default-k8s-diff-port-168174 localhost minikube]
	I1219 03:54:36.425832   56230 provision.go:177] copyRemoteCerts
	I1219 03:54:36.425892   56230 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1219 03:54:36.428255   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.428656   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:36.428686   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.428839   56230 sshutil.go:53] new ssh client: &{IP:192.168.50.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/default-k8s-diff-port-168174/id_rsa Username:docker}
	I1219 03:54:36.519020   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1219 03:54:36.558591   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1219 03:54:36.592448   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1219 03:54:36.618754   56230 provision.go:87] duration metric: took 339.918165ms to configureAuth
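configureAuth above regenerates the machine's server certificate with the SAN list from the provision.go line (127.0.0.1, 192.168.50.68, the machine name, localhost, minikube) and copies it to /etc/docker on the guest. A quick illustrative check of the generated cert from the host, using the path shown in the scp lines:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/22230-5010/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'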
	I1219 03:54:36.618782   56230 buildroot.go:189] setting minikube options for container-runtime
	I1219 03:54:36.618965   56230 config.go:182] Loaded profile config "default-k8s-diff-port-168174": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:54:36.622080   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.622643   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:36.622690   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.622932   56230 main.go:144] libmachine: Using SSH client type: native
	I1219 03:54:36.623146   56230 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.50.68 22 <nil> <nil>}
	I1219 03:54:36.623170   56230 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1219 03:54:36.870072   56230 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1219 03:54:36.870099   56230 machine.go:97] duration metric: took 951.477635ms to provisionDockerMachine
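The SSH command just above drops a CRIO_MINIKUBE_OPTIONS drop-in with --insecure-registry 10.96.0.0/12 (the service CIDR) into /etc/sysconfig/crio.minikube and restarts crio. Two illustrative checks on the guest, over the same SSH session, to confirm it took effect:

    cat /etc/sysconfig/crio.minikube   # should contain the --insecure-registry 10.96.0.0/12 option
    systemctl is-active crio           # confirms the restart at the end of the provision step succeeded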
	I1219 03:54:36.870113   56230 start.go:293] postStartSetup for "default-k8s-diff-port-168174" (driver="kvm2")
	I1219 03:54:36.870125   56230 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1219 03:54:36.870211   56230 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1219 03:54:36.873360   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.873824   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:36.873854   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.873997   56230 sshutil.go:53] new ssh client: &{IP:192.168.50.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/default-k8s-diff-port-168174/id_rsa Username:docker}
	I1219 03:54:36.957455   56230 ssh_runner.go:195] Run: cat /etc/os-release
	I1219 03:54:36.962098   56230 info.go:137] Remote host: Buildroot 2025.02
	I1219 03:54:36.962123   56230 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-5010/.minikube/addons for local assets ...
	I1219 03:54:36.962187   56230 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-5010/.minikube/files for local assets ...
	I1219 03:54:36.962258   56230 filesync.go:149] local asset: /home/jenkins/minikube-integration/22230-5010/.minikube/files/etc/ssl/certs/89372.pem -> 89372.pem in /etc/ssl/certs
	I1219 03:54:36.962365   56230 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1219 03:54:36.973208   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/files/etc/ssl/certs/89372.pem --> /etc/ssl/certs/89372.pem (1708 bytes)
	I1219 03:54:37.001535   56230 start.go:296] duration metric: took 131.409863ms for postStartSetup
	I1219 03:54:37.001590   56230 fix.go:56] duration metric: took 17.775113489s for fixHost
	I1219 03:54:37.004880   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:37.005287   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:37.005312   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:37.005528   56230 main.go:144] libmachine: Using SSH client type: native
	I1219 03:54:37.005820   56230 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.50.68 22 <nil> <nil>}
	I1219 03:54:37.005839   56230 main.go:144] libmachine: About to run SSH command:
	date +%s.%N
	I1219 03:54:37.113597   56230 main.go:144] libmachine: SSH cmd err, output: <nil>: 1766116477.079572846
	
	I1219 03:54:37.113621   56230 fix.go:216] guest clock: 1766116477.079572846
	I1219 03:54:37.113630   56230 fix.go:229] Guest: 2025-12-19 03:54:37.079572846 +0000 UTC Remote: 2025-12-19 03:54:37.001596336 +0000 UTC m=+17.891500693 (delta=77.97651ms)
	I1219 03:54:37.113645   56230 fix.go:200] guest clock delta is within tolerance: 77.97651ms
	I1219 03:54:37.113651   56230 start.go:83] releasing machines lock for "default-k8s-diff-port-168174", held for 17.887209269s
	I1219 03:54:37.116322   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:37.116867   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:37.116898   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:37.117549   56230 ssh_runner.go:195] Run: cat /version.json
	I1219 03:54:37.117645   56230 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1219 03:54:37.121299   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:37.121532   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:37.121841   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:37.121885   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:37.122114   56230 sshutil.go:53] new ssh client: &{IP:192.168.50.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/default-k8s-diff-port-168174/id_rsa Username:docker}
	I1219 03:54:37.122168   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:37.122203   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:37.122439   56230 sshutil.go:53] new ssh client: &{IP:192.168.50.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/default-k8s-diff-port-168174/id_rsa Username:docker}
	I1219 03:54:37.200188   56230 ssh_runner.go:195] Run: systemctl --version
	I1219 03:54:37.236006   56230 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1219 03:54:37.382400   56230 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1219 03:54:37.391093   56230 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1219 03:54:37.391172   56230 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1219 03:54:37.412549   56230 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1219 03:54:37.412595   56230 start.go:496] detecting cgroup driver to use...
	I1219 03:54:37.412701   56230 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1219 03:54:37.432292   56230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1219 03:54:37.448705   56230 docker.go:218] disabling cri-docker service (if available) ...
	I1219 03:54:37.448757   56230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1219 03:54:37.464885   56230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1219 03:54:37.488524   56230 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1219 03:54:37.648374   56230 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1219 03:54:37.863271   56230 docker.go:234] disabling docker service ...
	I1219 03:54:37.863333   56230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1219 03:54:37.880285   56230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1219 03:54:37.895631   56230 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1219 03:54:38.053642   56230 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1219 03:54:38.210829   56230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1219 03:54:38.227130   56230 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 03:54:38.248699   56230 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1219 03:54:38.248763   56230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:54:38.260875   56230 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1219 03:54:38.260939   56230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:54:38.273032   56230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:54:38.284839   56230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:54:38.296706   56230 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1219 03:54:38.309100   56230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:54:38.320373   56230 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:54:38.343213   56230 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:54:38.355251   56230 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1219 03:54:38.366693   56230 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1219 03:54:38.366745   56230 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1219 03:54:38.386325   56230 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1219 03:54:38.397641   56230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:54:38.542778   56230 ssh_runner.go:195] Run: sudo systemctl restart crio
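
The run above reconfigures CRI-O entirely through in-place edits of /etc/crio/crio.conf.d/02-crio.conf before restarting the service. A minimal consolidation of the same commands from the log (pause image, cgroup driver, conmon cgroup and the unprivileged-port sysctl are the values used by this run; treat the drop-in path and values as assumptions for any other guest image):

    # Point crictl at the CRI-O socket (as written to /etc/crictl.yaml above).
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    # Rewrite the CRI-O drop-in: pause image, cgroupfs driver, conmon cgroup,
    # and let unprivileged pods bind low ports.
    CONF=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    sudo grep -q '^ *default_sysctls' "$CONF" || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
    sudo systemctl daemon-reload && sudo systemctl restart crio
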
	I1219 03:54:38.656266   56230 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1219 03:54:38.656354   56230 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1219 03:54:38.662225   56230 start.go:564] Will wait 60s for crictl version
	I1219 03:54:38.662286   56230 ssh_runner.go:195] Run: which crictl
	I1219 03:54:38.666072   56230 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1219 03:54:38.702242   56230 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1219 03:54:38.702324   56230 ssh_runner.go:195] Run: crio --version
	I1219 03:54:38.730733   56230 ssh_runner.go:195] Run: crio --version
	I1219 03:54:38.760806   56230 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.29.1 ...
	I1219 03:54:38.764622   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:38.765017   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:38.765041   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:38.765207   56230 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1219 03:54:38.769555   56230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:54:38.784218   56230 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-168174 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-168174 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.68 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1219 03:54:38.784318   56230 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1219 03:54:38.784389   56230 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:54:38.817654   56230 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.3". assuming images are not preloaded.
	I1219 03:54:38.817721   56230 ssh_runner.go:195] Run: which lz4
	I1219 03:54:38.821795   56230 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1219 03:54:38.826295   56230 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1219 03:54:38.826327   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340314847 bytes)
	I1219 03:54:36.531538   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:37.031367   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:37.531677   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:38.031134   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:38.532312   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:39.032552   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:39.532678   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:40.031267   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:40.531858   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:41.032379   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:38.502453   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:39.002949   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:39.502152   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:40.002580   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:40.501440   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:41.002612   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:41.501822   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:42.002247   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:42.502196   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:43.002641   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:40.045060   56230 crio.go:462] duration metric: took 1.223302426s to copy over tarball
	I1219 03:54:40.045121   56230 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1219 03:54:41.702628   56230 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.657483082s)
	I1219 03:54:41.702653   56230 crio.go:469] duration metric: took 1.657571319s to extract the tarball
	I1219 03:54:41.702661   56230 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1219 03:54:41.742396   56230 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:54:41.778250   56230 crio.go:514] all images are preloaded for cri-o runtime.
	I1219 03:54:41.778274   56230 cache_images.go:86] Images are preloaded, skipping loading
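
The preload path above copies a cached image tarball into the guest and unpacks it over /var so that CRI-O already holds every image kubeadm will ask for. A sketch of the guest-side steps, using the same tar flags and paths as the log (run inside the VM; the tarball was placed at /preloaded.tar.lz4 by minikube's SSH copy):

    # Unpack the preloaded images over /var, keeping extended attributes so
    # file capabilities survive, then confirm CRI-O can see them.
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4
    sudo crictl images --output json
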
	I1219 03:54:41.778281   56230 kubeadm.go:935] updating node { 192.168.50.68 8444 v1.34.3 crio true true} ...
	I1219 03:54:41.778393   56230 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-168174 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.68
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-168174 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1219 03:54:41.778466   56230 ssh_runner.go:195] Run: crio config
	I1219 03:54:41.824084   56230 cni.go:84] Creating CNI manager for ""
	I1219 03:54:41.824114   56230 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1219 03:54:41.824134   56230 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1219 03:54:41.824161   56230 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.68 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-168174 NodeName:default-k8s-diff-port-168174 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.68"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.68 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1219 03:54:41.824332   56230 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.68
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-168174"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.68"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.68"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
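
Once this generated config is on the guest as /var/tmp/minikube/kubeadm.yaml, the restart path further down does not run a full kubeadm init; it replays individual init phases against the same file. A sketch of that sequence, assuming the versioned binaries directory from the log (the env PATH wrapper mirrors how the commands appear below):

    # Replay kubeadm init phase by phase against the generated config
    # (mirrors the restartPrimaryControlPlane commands logged further down).
    KUBEADM_ENV='env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH"'
    CFG=/var/tmp/minikube/kubeadm.yaml
    sudo /bin/bash -c "$KUBEADM_ENV kubeadm init phase certs all --config $CFG"
    sudo /bin/bash -c "$KUBEADM_ENV kubeadm init phase kubeconfig all --config $CFG"
    sudo /bin/bash -c "$KUBEADM_ENV kubeadm init phase kubelet-start --config $CFG"
    sudo /bin/bash -c "$KUBEADM_ENV kubeadm init phase control-plane all --config $CFG"
    sudo /bin/bash -c "$KUBEADM_ENV kubeadm init phase etcd local --config $CFG"
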
	
	I1219 03:54:41.824436   56230 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1219 03:54:41.838181   56230 binaries.go:51] Found k8s binaries, skipping transfer
	I1219 03:54:41.838263   56230 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1219 03:54:41.850122   56230 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1219 03:54:41.871647   56230 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1219 03:54:41.891031   56230 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
	I1219 03:54:41.910970   56230 ssh_runner.go:195] Run: grep 192.168.50.68	control-plane.minikube.internal$ /etc/hosts
	I1219 03:54:41.915265   56230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.68	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
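
Both host.minikube.internal and control-plane.minikube.internal are pinned with the same /etc/hosts trick shown above: filter any existing entry for the name out of the file, append the new one, and copy the temp file back with sudo. A sketch of that pattern as a small helper (the function name is only for illustration; the addresses are the ones from this run):

    # Replace or add a single tab-separated /etc/hosts entry without duplicating it.
    add_hosts_entry() {
      local ip="$1" name="$2"
      { grep -v $'\t'"${name}\$" /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/h.$$"
      sudo cp "/tmp/h.$$" /etc/hosts
    }
    add_hosts_entry 192.168.50.1  host.minikube.internal
    add_hosts_entry 192.168.50.68 control-plane.minikube.internal
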
	I1219 03:54:41.929042   56230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:54:42.077837   56230 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:54:42.111492   56230 certs.go:69] Setting up /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/default-k8s-diff-port-168174 for IP: 192.168.50.68
	I1219 03:54:42.111515   56230 certs.go:195] generating shared ca certs ...
	I1219 03:54:42.111529   56230 certs.go:227] acquiring lock for ca certs: {Name:mk433b742ffd55233764aebe3c6f298e0d1f02ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:54:42.111713   56230 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22230-5010/.minikube/ca.key
	I1219 03:54:42.111782   56230 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22230-5010/.minikube/proxy-client-ca.key
	I1219 03:54:42.111804   56230 certs.go:257] generating profile certs ...
	I1219 03:54:42.111942   56230 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/default-k8s-diff-port-168174/client.key
	I1219 03:54:42.112027   56230 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/default-k8s-diff-port-168174/apiserver.key.ed8a7a08
	I1219 03:54:42.112078   56230 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/default-k8s-diff-port-168174/proxy-client.key
	I1219 03:54:42.112201   56230 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/8937.pem (1338 bytes)
	W1219 03:54:42.112240   56230 certs.go:480] ignoring /home/jenkins/minikube-integration/22230-5010/.minikube/certs/8937_empty.pem, impossibly tiny 0 bytes
	I1219 03:54:42.112252   56230 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca-key.pem (1675 bytes)
	I1219 03:54:42.112280   56230 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca.pem (1082 bytes)
	I1219 03:54:42.112309   56230 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/cert.pem (1123 bytes)
	I1219 03:54:42.112361   56230 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/key.pem (1679 bytes)
	I1219 03:54:42.112423   56230 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/files/etc/ssl/certs/89372.pem (1708 bytes)
	I1219 03:54:42.113420   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1219 03:54:42.154291   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1219 03:54:42.194006   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1219 03:54:42.221732   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1219 03:54:42.253007   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/default-k8s-diff-port-168174/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1219 03:54:42.280935   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/default-k8s-diff-port-168174/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1219 03:54:42.315083   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/default-k8s-diff-port-168174/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1219 03:54:42.342426   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/default-k8s-diff-port-168174/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1219 03:54:42.371444   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1219 03:54:42.402350   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/certs/8937.pem --> /usr/share/ca-certificates/8937.pem (1338 bytes)
	I1219 03:54:42.430533   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/files/etc/ssl/certs/89372.pem --> /usr/share/ca-certificates/89372.pem (1708 bytes)
	I1219 03:54:42.462798   56230 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1219 03:54:42.483977   56230 ssh_runner.go:195] Run: openssl version
	I1219 03:54:42.490839   56230 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/89372.pem
	I1219 03:54:42.503565   56230 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/89372.pem /etc/ssl/certs/89372.pem
	I1219 03:54:42.514852   56230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/89372.pem
	I1219 03:54:42.520693   56230 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 19 02:46 /usr/share/ca-certificates/89372.pem
	I1219 03:54:42.520739   56230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/89372.pem
	I1219 03:54:42.528108   56230 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1219 03:54:42.539720   56230 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/89372.pem /etc/ssl/certs/3ec20f2e.0
	I1219 03:54:42.550915   56230 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:54:42.561679   56230 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1219 03:54:42.572526   56230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:54:42.577725   56230 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 19 02:26 /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:54:42.577781   56230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:54:42.584786   56230 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1219 03:54:42.596115   56230 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1219 03:54:42.607332   56230 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8937.pem
	I1219 03:54:42.618682   56230 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8937.pem /etc/ssl/certs/8937.pem
	I1219 03:54:42.630292   56230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8937.pem
	I1219 03:54:42.635409   56230 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 19 02:46 /usr/share/ca-certificates/8937.pem
	I1219 03:54:42.635452   56230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8937.pem
	I1219 03:54:42.642710   56230 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1219 03:54:42.654104   56230 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8937.pem /etc/ssl/certs/51391683.0
	I1219 03:54:42.666207   56230 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1219 03:54:42.671385   56230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1219 03:54:42.678373   56230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1219 03:54:42.685534   56230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1219 03:54:42.692140   56230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1219 03:54:42.698549   56230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1219 03:54:42.705279   56230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
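
The certificate housekeeping above boils down to two openssl idioms: deriving a CA's subject hash so it can be symlinked under /etc/ssl/certs, and confirming a certificate still has at least 24 hours (86400 seconds) of validity. A sketch of both, using paths from this run:

    # Link the minikube CA under its subject-hash name so OpenSSL-based clients
    # find it, then check a serving cert is valid for at least another day.
    CA=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CA")
    sudo ln -fs "$CA" "/etc/ssl/certs/${HASH}.0"
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "apiserver-kubelet-client.crt is valid for at least 24h"
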
	I1219 03:54:42.712285   56230 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-168174 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-168174 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.68 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:54:42.712383   56230 cri.go:57] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1219 03:54:42.712433   56230 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:54:42.745951   56230 cri.go:92] found id: ""
	I1219 03:54:42.746000   56230 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1219 03:54:42.757185   56230 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1219 03:54:42.757201   56230 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1219 03:54:42.757240   56230 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1219 03:54:42.768155   56230 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1219 03:54:42.769156   56230 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-168174" does not appear in /home/jenkins/minikube-integration/22230-5010/kubeconfig
	I1219 03:54:42.769826   56230 kubeconfig.go:62] /home/jenkins/minikube-integration/22230-5010/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-168174" cluster setting kubeconfig missing "default-k8s-diff-port-168174" context setting]
	I1219 03:54:42.770666   56230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5010/kubeconfig: {Name:mk464ac013e036429e360ed78fc04687ed75f83a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:54:42.772207   56230 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1219 03:54:42.782776   56230 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.50.68
	I1219 03:54:42.782799   56230 kubeadm.go:1161] stopping kube-system containers ...
	I1219 03:54:42.782811   56230 cri.go:57] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1219 03:54:42.782853   56230 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:54:42.827373   56230 cri.go:92] found id: ""
	I1219 03:54:42.827451   56230 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1219 03:54:42.855644   56230 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1219 03:54:42.867640   56230 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1219 03:54:42.867664   56230 kubeadm.go:158] found existing configuration files:
	
	I1219 03:54:42.867713   56230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1219 03:54:42.879242   56230 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1219 03:54:42.879345   56230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1219 03:54:42.890737   56230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1219 03:54:42.900979   56230 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1219 03:54:42.901033   56230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1219 03:54:42.911989   56230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1219 03:54:42.922081   56230 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1219 03:54:42.922121   56230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1219 03:54:42.933197   56230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1219 03:54:42.943650   56230 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1219 03:54:42.943706   56230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1219 03:54:42.954819   56230 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1219 03:54:42.965503   56230 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:54:43.022499   56230 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:54:41.533216   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:42.031785   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:42.531762   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:43.032044   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:43.531965   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:44.032348   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:44.532701   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:45.032707   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:45.531729   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:46.032031   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:43.502226   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:44.002160   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:44.502401   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:45.002719   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:45.502332   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:46.001536   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:46.501990   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:47.002547   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:47.501403   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:48.002631   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:44.652743   56230 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.630210852s)
	I1219 03:54:44.652817   56230 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:54:44.912221   56230 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:54:44.996004   56230 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:54:45.067644   56230 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:54:45.067725   56230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:54:45.568080   56230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:54:46.068722   56230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:54:46.568114   56230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:54:47.068013   56230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:54:47.117129   56230 api_server.go:72] duration metric: took 2.049494189s to wait for apiserver process to appear ...
	I1219 03:54:47.117153   56230 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:54:47.117174   56230 api_server.go:253] Checking apiserver healthz at https://192.168.50.68:8444/healthz ...
	I1219 03:54:47.117680   56230 api_server.go:269] stopped: https://192.168.50.68:8444/healthz: Get "https://192.168.50.68:8444/healthz": dial tcp 192.168.50.68:8444: connect: connection refused
	I1219 03:54:47.617323   56230 api_server.go:253] Checking apiserver healthz at https://192.168.50.68:8444/healthz ...
	I1219 03:54:46.534635   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:47.031616   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:47.531182   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:48.032359   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:48.532986   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:49.031214   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:49.532385   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:50.032130   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:50.532478   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:51.031638   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:49.988621   56230 api_server.go:279] https://192.168.50.68:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1219 03:54:49.988647   56230 api_server.go:103] status: https://192.168.50.68:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1219 03:54:49.988661   56230 api_server.go:253] Checking apiserver healthz at https://192.168.50.68:8444/healthz ...
	I1219 03:54:50.015383   56230 api_server.go:279] https://192.168.50.68:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1219 03:54:50.015404   56230 api_server.go:103] status: https://192.168.50.68:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1219 03:54:50.117699   56230 api_server.go:253] Checking apiserver healthz at https://192.168.50.68:8444/healthz ...
	I1219 03:54:50.129872   56230 api_server.go:279] https://192.168.50.68:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:54:50.129895   56230 api_server.go:103] status: https://192.168.50.68:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:54:50.617488   56230 api_server.go:253] Checking apiserver healthz at https://192.168.50.68:8444/healthz ...
	I1219 03:54:50.622220   56230 api_server.go:279] https://192.168.50.68:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:54:50.622255   56230 api_server.go:103] status: https://192.168.50.68:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:54:51.117929   56230 api_server.go:253] Checking apiserver healthz at https://192.168.50.68:8444/healthz ...
	I1219 03:54:51.126710   56230 api_server.go:279] https://192.168.50.68:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:54:51.126741   56230 api_server.go:103] status: https://192.168.50.68:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:54:51.617345   56230 api_server.go:253] Checking apiserver healthz at https://192.168.50.68:8444/healthz ...
	I1219 03:54:51.622349   56230 api_server.go:279] https://192.168.50.68:8444/healthz returned 200:
	ok
	I1219 03:54:51.628913   56230 api_server.go:141] control plane version: v1.34.3
	I1219 03:54:51.628947   56230 api_server.go:131] duration metric: took 4.511785965s to wait for apiserver health ...
	I1219 03:54:51.628957   56230 cni.go:84] Creating CNI manager for ""
	I1219 03:54:51.628965   56230 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1219 03:54:51.630494   56230 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1219 03:54:51.631426   56230 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1219 03:54:51.647385   56230 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1219 03:54:51.669320   56230 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:54:51.675232   56230 system_pods.go:59] 8 kube-system pods found
	I1219 03:54:51.675273   56230 system_pods.go:61] "coredns-66bc5c9577-dnfcc" [b8ee66a7-b129-4499-aad8-a988ecea241c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:54:51.675288   56230 system_pods.go:61] "etcd-default-k8s-diff-port-168174" [af7e1768-95b9-431e-877a-63138a91dcdc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:54:51.675298   56230 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-168174" [a59f79b0-b99b-4092-8b4a-773ff0a04569] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:54:51.675318   56230 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-168174" [405651c9-cdc7-414f-a512-f11b7b580eb8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:54:51.675328   56230 system_pods.go:61] "kube-proxy-zs4wg" [2212782c-32ab-4355-8dda-9117953b0223] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:54:51.675338   56230 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-168174" [a524e706-e546-4aa4-848c-f52eb802ffb0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:54:51.675347   56230 system_pods.go:61] "metrics-server-746fcd58dc-xjkbx" [c6e2f2b2-7b94-4ff2-85ba-e79d72b30655] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:54:51.675358   56230 system_pods.go:61] "storage-provisioner" [ec1d1888-a950-48d5-9b73-440e7556818b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:54:51.675366   56230 system_pods.go:74] duration metric: took 6.023523ms to wait for pod list to return data ...
	I1219 03:54:51.675387   56230 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:54:51.680456   56230 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1219 03:54:51.680483   56230 node_conditions.go:123] node cpu capacity is 2
	I1219 03:54:51.680500   56230 node_conditions.go:105] duration metric: took 5.106096ms to run NodePressure ...
	I1219 03:54:51.680558   56230 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:54:51.941503   56230 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1219 03:54:51.945528   56230 kubeadm.go:744] kubelet initialised
	I1219 03:54:51.945566   56230 kubeadm.go:745] duration metric: took 4.028139ms waiting for restarted kubelet to initialise ...
	I1219 03:54:51.945597   56230 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1219 03:54:51.967660   56230 ops.go:34] apiserver oom_adj: -16
	I1219 03:54:51.967680   56230 kubeadm.go:602] duration metric: took 9.210474475s to restartPrimaryControlPlane
	I1219 03:54:51.967689   56230 kubeadm.go:403] duration metric: took 9.255411647s to StartCluster
	I1219 03:54:51.967705   56230 settings.go:142] acquiring lock: {Name:mkb131693a40b9aa50c302192272518c3561c861 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:54:51.967787   56230 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-5010/kubeconfig
	I1219 03:54:51.970216   56230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5010/kubeconfig: {Name:mk464ac013e036429e360ed78fc04687ed75f83a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:54:51.970558   56230 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.50.68 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1219 03:54:51.970693   56230 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1219 03:54:51.970789   56230 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-168174"
	I1219 03:54:51.970812   56230 config.go:182] Loaded profile config "default-k8s-diff-port-168174": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:54:51.970826   56230 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-168174"
	I1219 03:54:51.970825   56230 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-168174"
	I1219 03:54:51.970846   56230 addons.go:70] Setting metrics-server=true in profile "default-k8s-diff-port-168174"
	I1219 03:54:51.970858   56230 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-168174"
	I1219 03:54:51.970858   56230 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-168174"
	I1219 03:54:51.970884   56230 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-168174"
	W1219 03:54:51.970893   56230 addons.go:248] addon dashboard should already be in state true
	I1219 03:54:51.970919   56230 host.go:66] Checking if "default-k8s-diff-port-168174" exists ...
	W1219 03:54:51.970836   56230 addons.go:248] addon storage-provisioner should already be in state true
	I1219 03:54:51.970978   56230 host.go:66] Checking if "default-k8s-diff-port-168174" exists ...
	I1219 03:54:51.970861   56230 addons.go:239] Setting addon metrics-server=true in "default-k8s-diff-port-168174"
	W1219 03:54:51.971035   56230 addons.go:248] addon metrics-server should already be in state true
	I1219 03:54:51.971057   56230 host.go:66] Checking if "default-k8s-diff-port-168174" exists ...
	I1219 03:54:51.971960   56230 out.go:179] * Verifying Kubernetes components...
	I1219 03:54:51.973008   56230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:54:51.974650   56230 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:54:51.974726   56230 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
	I1219 03:54:51.974952   56230 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1219 03:54:51.975006   56230 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 03:54:48.502712   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:49.001711   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:49.501592   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:50.001601   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:50.501313   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:51.002296   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:51.502360   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:52.002651   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:52.503108   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:53.002165   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:51.975433   56230 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-168174"
	W1219 03:54:51.975454   56230 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:54:51.975493   56230 host.go:66] Checking if "default-k8s-diff-port-168174" exists ...
	I1219 03:54:51.975992   56230 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1219 03:54:51.976010   56230 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1219 03:54:51.976037   56230 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:54:51.976049   56230 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:54:51.978029   56230 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:54:51.978047   56230 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:54:51.979030   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:51.979580   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:51.979617   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:51.979992   56230 sshutil.go:53] new ssh client: &{IP:192.168.50.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/default-k8s-diff-port-168174/id_rsa Username:docker}
	I1219 03:54:51.980624   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:51.980627   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:51.981054   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:51.981088   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:51.981091   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:51.981123   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:51.981299   56230 sshutil.go:53] new ssh client: &{IP:192.168.50.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/default-k8s-diff-port-168174/id_rsa Username:docker}
	I1219 03:54:51.981430   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:51.981442   56230 sshutil.go:53] new ssh client: &{IP:192.168.50.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/default-k8s-diff-port-168174/id_rsa Username:docker}
	I1219 03:54:51.981908   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:51.981931   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:51.982118   56230 sshutil.go:53] new ssh client: &{IP:192.168.50.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/default-k8s-diff-port-168174/id_rsa Username:docker}
	I1219 03:54:52.329267   56230 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:54:52.362110   56230 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-168174" to be "Ready" ...
	I1219 03:54:52.365712   56230 node_ready.go:49] node "default-k8s-diff-port-168174" is "Ready"
	I1219 03:54:52.365740   56230 node_ready.go:38] duration metric: took 3.595186ms for node "default-k8s-diff-port-168174" to be "Ready" ...
	I1219 03:54:52.365758   56230 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:54:52.365821   56230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:54:52.390728   56230 api_server.go:72] duration metric: took 420.108978ms to wait for apiserver process to appear ...
	I1219 03:54:52.390759   56230 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:54:52.390781   56230 api_server.go:253] Checking apiserver healthz at https://192.168.50.68:8444/healthz ...
	I1219 03:54:52.397481   56230 api_server.go:279] https://192.168.50.68:8444/healthz returned 200:
	ok
	I1219 03:54:52.398595   56230 api_server.go:141] control plane version: v1.34.3
	I1219 03:54:52.398619   56230 api_server.go:131] duration metric: took 7.851716ms to wait for apiserver health ...
	I1219 03:54:52.398634   56230 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:54:52.403556   56230 system_pods.go:59] 8 kube-system pods found
	I1219 03:54:52.403621   56230 system_pods.go:61] "coredns-66bc5c9577-dnfcc" [b8ee66a7-b129-4499-aad8-a988ecea241c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:54:52.403638   56230 system_pods.go:61] "etcd-default-k8s-diff-port-168174" [af7e1768-95b9-431e-877a-63138a91dcdc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:54:52.403653   56230 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-168174" [a59f79b0-b99b-4092-8b4a-773ff0a04569] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:54:52.403664   56230 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-168174" [405651c9-cdc7-414f-a512-f11b7b580eb8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:54:52.403676   56230 system_pods.go:61] "kube-proxy-zs4wg" [2212782c-32ab-4355-8dda-9117953b0223] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:54:52.403690   56230 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-168174" [a524e706-e546-4aa4-848c-f52eb802ffb0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:54:52.403705   56230 system_pods.go:61] "metrics-server-746fcd58dc-xjkbx" [c6e2f2b2-7b94-4ff2-85ba-e79d72b30655] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:54:52.403714   56230 system_pods.go:61] "storage-provisioner" [ec1d1888-a950-48d5-9b73-440e7556818b] Running
	I1219 03:54:52.403725   56230 system_pods.go:74] duration metric: took 5.080532ms to wait for pod list to return data ...
	I1219 03:54:52.403737   56230 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:54:52.406964   56230 default_sa.go:45] found service account: "default"
	I1219 03:54:52.406989   56230 default_sa.go:55] duration metric: took 3.241415ms for default service account to be created ...
	I1219 03:54:52.406999   56230 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:54:52.412763   56230 system_pods.go:86] 8 kube-system pods found
	I1219 03:54:52.412787   56230 system_pods.go:89] "coredns-66bc5c9577-dnfcc" [b8ee66a7-b129-4499-aad8-a988ecea241c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:54:52.412797   56230 system_pods.go:89] "etcd-default-k8s-diff-port-168174" [af7e1768-95b9-431e-877a-63138a91dcdc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:54:52.412804   56230 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-168174" [a59f79b0-b99b-4092-8b4a-773ff0a04569] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:54:52.412810   56230 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-168174" [405651c9-cdc7-414f-a512-f11b7b580eb8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:54:52.412816   56230 system_pods.go:89] "kube-proxy-zs4wg" [2212782c-32ab-4355-8dda-9117953b0223] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:54:52.412821   56230 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-168174" [a524e706-e546-4aa4-848c-f52eb802ffb0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:54:52.412826   56230 system_pods.go:89] "metrics-server-746fcd58dc-xjkbx" [c6e2f2b2-7b94-4ff2-85ba-e79d72b30655] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:54:52.412830   56230 system_pods.go:89] "storage-provisioner" [ec1d1888-a950-48d5-9b73-440e7556818b] Running
	I1219 03:54:52.412837   56230 system_pods.go:126] duration metric: took 5.832618ms to wait for k8s-apps to be running ...
	I1219 03:54:52.412847   56230 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:54:52.412890   56230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:54:52.437131   56230 system_svc.go:56] duration metric: took 24.267658ms WaitForService to wait for kubelet
	I1219 03:54:52.437166   56230 kubeadm.go:587] duration metric: took 466.551246ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:54:52.437188   56230 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:54:52.440753   56230 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1219 03:54:52.440776   56230 node_conditions.go:123] node cpu capacity is 2
	I1219 03:54:52.440789   56230 node_conditions.go:105] duration metric: took 3.595658ms to run NodePressure ...
	I1219 03:54:52.440804   56230 start.go:242] waiting for startup goroutines ...
	I1219 03:54:52.571235   56230 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:54:52.579720   56230 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:54:52.588696   56230 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	I1219 03:54:52.607999   56230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:54:52.623079   56230 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1219 03:54:52.623103   56230 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1219 03:54:52.632201   56230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:54:52.689775   56230 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1219 03:54:52.689802   56230 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1219 03:54:52.755241   56230 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:54:52.755280   56230 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1219 03:54:52.860818   56230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:54:51.531836   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:52.032945   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:52.532771   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:53.031681   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:53.532510   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:54.032369   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:54.532915   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:55.031905   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:55.531152   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:56.032011   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:53.502165   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:54.002813   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:54.501582   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:55.002986   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:55.501711   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:56.000984   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:56.502399   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:57.002200   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:57.502369   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:58.002000   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:54.655285   56230 ssh_runner.go:235] Completed: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh": (2.066552827s)
	I1219 03:54:54.655390   56230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	I1219 03:54:54.655405   56230 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.047371795s)
	I1219 03:54:54.655528   56230 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.023298979s)
	I1219 03:54:54.655657   56230 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.794802456s)
	I1219 03:54:54.655684   56230 addons.go:500] Verifying addon metrics-server=true in "default-k8s-diff-port-168174"
	I1219 03:54:57.969258   56230 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (3.313828747s)
	I1219 03:54:57.969346   56230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:54:58.498709   56230 addons.go:500] Verifying addon dashboard=true in "default-k8s-diff-port-168174"
	I1219 03:54:58.501734   56230 out.go:179] * Verifying dashboard addon...
	I1219 03:54:58.504348   56230 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:54:58.510036   56230 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:54:58.510056   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:59.010436   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:56.532022   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:57.031585   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:57.531985   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:58.032925   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:58.533378   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:59.032504   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:59.530653   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:00.031045   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:00.531549   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:01.030879   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:58.502926   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:59.001807   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:59.501672   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:00.001239   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:00.501991   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:01.001622   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:01.501692   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:02.002517   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:02.502331   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:03.001757   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:59.508121   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:00.008244   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:00.507884   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:01.008223   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:01.507861   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:02.012677   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:02.507898   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:03.008121   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:03.508367   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:04.008842   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:01.531235   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:02.031845   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:02.531542   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:03.030822   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:03.532087   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:04.032140   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:04.532095   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:05.032183   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:05.532546   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:06.031699   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:03.501262   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:04.001782   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:04.501640   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:05.002705   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:05.501849   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:06.001647   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:06.502225   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:07.002170   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:07.502397   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:08.003244   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:04.508346   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:05.007493   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:05.507987   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:06.007825   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:06.507635   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:07.008062   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:07.507047   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:08.008442   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:08.510089   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:09.008180   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:06.536198   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:07.032221   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:07.532227   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:08.032198   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:08.531813   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:09.031889   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:09.531666   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:10.031122   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:10.532149   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:11.031983   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:08.502642   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:09.001743   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:09.502066   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:10.002044   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:10.502017   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:11.002386   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:11.502467   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:12.002107   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:12.502677   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:13.002161   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:09.507112   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:10.008461   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:10.508312   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:11.008611   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:11.508384   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:12.008280   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:12.508541   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:13.008623   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:13.508431   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:14.009349   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:11.532619   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:12.031875   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:12.532589   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:13.031244   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:13.531877   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:14.031690   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:14.531758   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:15.032196   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:15.531803   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:16.030943   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:13.502018   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:14.002044   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:14.502072   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:15.002330   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:15.502958   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:16.001850   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:16.501605   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:17.001853   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:17.501780   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:18.001784   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:14.508124   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:15.008333   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:15.508545   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:16.008130   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:16.507985   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:17.007539   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:17.508141   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:18.007919   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:18.507523   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:19.008842   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:16.532418   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:17.032219   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:17.532547   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:18.032233   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:18.532551   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:19.033166   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:19.531532   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:20.031971   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:20.532050   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:21.032787   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:18.501956   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:19.002220   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:19.502226   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:20.003355   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:20.501800   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:21.001708   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:21.501127   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:22.003195   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:22.502775   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:23.002072   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:19.507432   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:20.008746   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:20.508268   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:21.008770   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:21.508749   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:22.009746   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:22.509595   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:23.008351   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:23.508700   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:24.009427   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:21.532398   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:22.033297   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:22.531966   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:23.032953   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:23.532813   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:24.032632   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:24.531743   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:25.031446   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:25.531999   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:23.502200   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:24.508429   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:26.032229   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	[... identical "waiting for pod" poll messages from processes 55595, 55957, and 56230 (kapi.go:96), logged roughly every 500ms, omitted; the kubernetes-dashboard-web pod remained Pending through at least 03:56:41 ...]
	I1219 03:56:39.502492   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:40.002263   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:40.501814   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:41.002188   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:41.502456   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:42.002449   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:42.503413   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:43.002514   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:39.508385   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:40.008219   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:40.509237   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:41.007998   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:41.507734   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:42.008610   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:42.509142   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:43.008330   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:43.507609   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:44.009119   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:41.531626   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:42.032337   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:42.532298   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:43.032378   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:43.531679   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:44.032529   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:44.532155   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:45.031828   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:45.531299   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:46.031239   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:43.502830   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:44.001989   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:44.502331   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:45.002798   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:45.502197   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:46.001852   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:46.501952   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:47.001753   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:47.501421   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:48.002328   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:44.508315   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:45.008862   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:45.508067   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:46.008030   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:46.507755   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:47.008786   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:47.507672   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:48.009365   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:48.509016   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:49.007277   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:46.531667   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:47.031610   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:47.532096   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:48.032319   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:48.532500   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:49.031773   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:49.531561   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:50.032598   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:50.531974   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:51.031362   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:48.502152   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:49.001130   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:49.502479   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:50.002251   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:50.501762   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:51.000846   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:51.502253   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:52.002765   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:52.502160   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:53.001409   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:49.508190   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:50.008459   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:50.508829   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:51.007664   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:51.509469   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:52.009747   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:52.509579   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:53.009682   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:53.508738   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:54.008970   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:51.532197   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:52.031685   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:52.532322   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:53.031885   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:53.531778   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:54.031643   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:54.531467   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:55.031815   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:55.531155   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:56.031720   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:53.503475   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:54.001639   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:54.501436   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:55.002712   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:55.501897   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:56.001181   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:56.501530   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:57.000985   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:57.501730   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:58.001514   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:54.508263   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:55.007505   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:55.508726   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:56.008230   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:56.508664   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:57.008997   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:57.507428   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:58.008379   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:58.508549   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:59.007995   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:56.531536   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:57.032617   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:57.535990   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:58.031685   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:58.533156   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:59.031587   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:59.532830   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:00.031367   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:00.532930   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:01.031943   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:58.502386   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:59.002215   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:59.503037   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:00.001428   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:00.502319   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:01.002311   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:01.502140   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:02.002283   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:02.502150   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:03.002240   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:59.507946   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:00.008416   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:00.508631   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:01.008561   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:01.508912   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:02.008658   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:02.509386   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:03.008665   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:03.509011   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:04.008072   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:01.533032   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:02.032143   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:02.531588   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:03.032371   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:03.533496   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:04.033023   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:04.531133   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:05.032394   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:05.532243   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:06.031898   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:03.502405   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:04.001877   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:04.505174   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:05.002029   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:05.502125   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:06.001824   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:06.501660   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:07.002257   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:07.502497   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:08.002911   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:04.509042   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:05.008740   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:05.507989   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:06.007873   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:06.508285   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:07.007091   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:07.508238   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:08.008238   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:08.508597   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:09.009516   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:06.531381   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:07.032162   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:07.531649   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:08.032718   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:08.532156   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:09.033496   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:09.533930   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:10.032368   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:10.532625   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:11.032661   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:08.501952   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:09.001604   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:09.501905   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:10.002311   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:10.501777   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:11.001546   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:11.502154   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:12.002455   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:12.503055   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:13.001472   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:09.508050   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:10.008080   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:10.507970   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:11.007844   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:11.508056   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:12.007765   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:12.508456   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:13.007981   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:13.508855   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:14.008604   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:11.532081   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:12.032702   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:12.531078   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:13.031663   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:13.531993   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:14.033077   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:14.531457   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:15.032927   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:15.531699   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:16.031008   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:13.502839   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:14.001682   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:14.501484   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:15.003428   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:15.502649   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:16.002047   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:16.501936   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:17.001951   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:17.502955   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:18.002709   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:14.509628   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:15.008629   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:15.509037   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:16.008098   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:16.508408   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:17.009392   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:17.507832   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:18.008540   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:18.509468   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:19.008988   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:16.532091   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:17.032487   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:17.532767   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:18.032348   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:18.533265   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:19.032832   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:19.533225   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:20.032480   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:20.531859   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:21.031535   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:18.502389   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:19.002611   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:19.501620   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:20.002058   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:20.502778   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:21.002073   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:21.501287   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:22.001492   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:22.503034   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:23.002058   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:19.507218   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:20.008007   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:20.507903   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:21.008002   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:21.508538   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:22.009106   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:22.509031   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:23.008019   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:23.508250   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:24.009604   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:21.532463   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:22.032668   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:22.531757   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:23.031273   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:23.533278   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:24.032950   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:24.531375   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:25.032433   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:25.532764   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:26.031941   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:23.501829   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:24.001397   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:24.502802   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:25.001851   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:25.503206   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:26.001481   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:26.502653   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:27.002180   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:27.501887   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:28.001927   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:24.509024   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:25.007589   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:25.509073   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:26.008555   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:26.508449   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:27.008256   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:27.508501   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:28.009916   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:28.508490   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:29.008336   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:26.531904   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:27.031168   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:27.532025   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:28.032276   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:28.531968   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:29.032764   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:29.531973   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:30.031624   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:30.532201   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:31.032129   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:28.502278   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:29.001507   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:29.501338   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:30.002753   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:30.501620   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:31.001545   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:31.502545   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:32.001650   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:32.501704   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:33.001060   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:29.508006   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:30.007837   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:30.509358   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:31.008238   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:31.508132   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:32.007983   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:32.508981   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:33.007803   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:33.507769   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:34.009970   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:31.532685   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:32.033023   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:32.531348   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:33.031614   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:33.533370   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:34.032795   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:34.531237   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:35.032033   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:35.532778   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:36.031294   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:33.502337   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:34.002204   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:34.501845   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:35.002344   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:35.503195   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:36.002894   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:36.501979   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:37.002008   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:37.501981   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:38.001740   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:34.507806   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:35.009357   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:35.508695   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:36.008959   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:36.509725   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:37.008245   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:37.507606   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:38.008218   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:38.507870   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:39.007087   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:36.532257   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:37.032024   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:37.532220   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:38.031647   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:38.532123   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:39.032889   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:39.532444   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:40.032621   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:40.532943   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:41.031712   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:38.501355   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:39.002083   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:39.501469   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:40.002554   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:40.501408   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:41.002216   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:41.502543   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:42.001754   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:42.501454   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:43.002870   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:39.507033   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:40.007862   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:40.509097   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:41.008460   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:41.509108   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:42.007794   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:42.508514   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:43.009784   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:43.508154   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:44.008565   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:41.531552   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:42.032724   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:42.531968   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:43.031728   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:43.531786   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:44.032471   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:44.531802   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:45.032162   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:45.532320   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:46.031297   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:43.503203   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:44.002682   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:44.502066   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:45.001775   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:45.501403   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:46.002298   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:46.502073   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:47.001483   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:47.501639   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:48.002266   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:44.508296   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:45.008881   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:45.508078   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:46.007871   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:46.508564   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:47.008609   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:47.507625   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:48.008815   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:48.507996   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:49.009033   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:46.531916   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:47.032003   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:47.535669   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:48.032260   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:48.533368   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:49.032732   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:49.532734   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:50.031076   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:50.531706   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:51.031411   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:48.502350   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:49.002202   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:49.502113   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:50.002611   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:50.501323   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:51.002251   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:51.501726   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:52.003470   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:52.502490   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:53.002224   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:49.507379   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:50.007665   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:50.508534   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:51.009007   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:51.509344   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:52.007746   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:52.508532   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:53.009346   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:53.507367   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:54.009828   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:51.532471   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:52.032182   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:52.531696   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:53.031891   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:53.531523   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:54.032527   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:54.531544   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:55.033055   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:55.532251   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:56.032012   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:53.501701   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:54.001815   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:54.501403   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:55.001721   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:55.502408   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:56.006350   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:56.502718   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:57.000975   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:57.502050   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:58.001993   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:54.507665   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:55.010022   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:55.507891   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:56.017962   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:56.509387   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:57.009499   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:57.508592   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:58.007712   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:58.509159   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:59.008024   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:56.532417   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:57.032702   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:57.531960   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:58.032030   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:58.532438   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:59.032562   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:59.532541   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:00.031906   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:00.533707   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:01.031481   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:58.501333   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:59.002706   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:59.501390   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:00.003083   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:00.501477   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:01.003243   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:01.502051   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:02.002119   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:02.502250   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:03.001892   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:59.508467   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:00.007934   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:00.508461   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:01.009263   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:01.508676   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:02.007597   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:02.508263   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:03.008661   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:03.508545   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:04.008653   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:01.533009   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:02.032493   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:02.532027   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:03.032116   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:03.531261   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:04.034181   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:04.531702   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:05.032409   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:05.533808   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:06.031246   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:03.501444   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:04.002084   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:04.501717   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:05.002397   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:05.502329   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:06.002161   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:06.501724   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:07.001096   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:07.501676   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:08.001373   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:04.508793   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:05.009558   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:05.508307   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:06.008745   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:06.508478   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:07.009365   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:07.508285   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:08.008394   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:08.507659   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:09.008883   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:06.531671   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:07.032663   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:07.532654   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:08.032443   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:08.531860   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:09.031786   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:09.532418   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:10.032031   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:10.531026   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:11.031184   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:08.502311   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:09.001761   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:09.501921   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:10.001779   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:10.502884   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:11.000815   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:11.502204   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:12.002552   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:12.502487   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:13.002005   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:09.509248   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:10.008315   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:10.507712   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:11.009764   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:11.509368   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:12.007428   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:12.508548   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:13.008954   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:13.508930   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:14.008936   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:11.532311   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:12.032156   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:12.531768   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:13.031259   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:13.532112   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:14.032440   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:14.533083   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:15.031470   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:15.533077   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:16.031626   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:13.503116   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:14.002138   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:14.502257   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:15.002721   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:15.501511   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:16.002183   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:16.502306   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:17.002714   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:17.501224   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:18.003247   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:14.508715   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:15.008752   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:15.509114   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:16.007677   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:16.508804   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:17.009618   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:17.508120   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:18.007885   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:18.507480   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:19.008978   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:16.532146   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:17.031615   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:17.532552   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:18.031381   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:18.532386   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:19.032461   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:19.533200   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:20.032375   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:20.531718   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:21.030828   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:18.502028   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:19.001762   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:19.501418   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:20.002914   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:20.501869   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:21.001896   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:21.501339   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:22.002565   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:22.502667   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:23.001134   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:19.507828   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:20.008203   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:20.508364   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:21.008929   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:21.508117   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:22.007662   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:22.507899   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:23.008710   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:23.507212   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:24.008474   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:21.532845   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:22.032290   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:22.532646   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:23.031957   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:23.531378   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:24.032264   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:24.531928   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:25.031473   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:25.532386   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:26.032382   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:23.502231   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:24.002752   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:24.500970   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:25.000924   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:25.501030   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:26.002189   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:26.502781   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:27.002623   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:27.501117   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:28.001792   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:24.508109   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:25.008892   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:25.508228   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:26.007643   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:26.508278   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:27.009399   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:27.508216   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:28.008474   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:28.507952   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:29.008596   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:26.532465   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:27.032800   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:27.531643   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:28.031616   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:28.533745   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:29.031460   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:29.532616   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:30.032471   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:30.532228   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:31.031437   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:28.501355   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:29.001764   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:29.501298   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:30.003052   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:30.502950   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:31.001770   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:31.501738   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:32.003204   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:32.503749   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:33.000964   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:29.508615   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:30.009187   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:30.507594   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:31.009258   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:31.508166   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:32.008876   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:32.508828   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:33.009323   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:33.507635   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:34.008857   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:31.532499   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:32.033303   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:32.532140   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:33.031451   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:33.532012   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:34.031739   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:34.531969   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:35.031026   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:35.531884   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:36.032850   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:33.501466   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:34.002962   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:34.501319   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:35.002095   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:35.501455   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:36.002904   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:36.502079   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:37.002351   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:37.502139   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:38.002366   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:34.507536   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:35.009458   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:35.508342   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:36.008114   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:36.507689   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:37.008772   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:37.508175   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:38.008253   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:38.508521   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:39.010486   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:36.531019   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:37.032116   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:37.531731   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:38.031746   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:38.531610   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:39.032124   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:39.531488   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:40.032358   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:40.532561   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:41.032192   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:38.502021   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:39.001431   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:39.502831   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:40.001874   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:40.501461   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:41.002135   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:41.502101   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:42.002403   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:42.501826   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:43.001388   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:39.508693   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:40.008934   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:40.507098   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:41.007956   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:41.508938   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:42.007971   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:42.508613   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:43.009088   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:43.507422   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:44.008448   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:41.531909   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:42.031872   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:42.532556   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:43.032306   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:43.532154   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:44.032667   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:44.531742   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:45.032077   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:45.531946   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:46.033451   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:43.502067   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:44.002320   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:44.501957   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:45.002135   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:45.501241   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:46.002784   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:46.502988   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:47.004826   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:47.502313   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:48.002638   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:44.507745   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:45.009163   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:45.508092   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:46.008607   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:46.508116   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:47.008371   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:47.507434   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:48.008847   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:48.507621   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:49.008655   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:46.532124   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:47.032109   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:47.531627   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:48.031388   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:48.532769   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:49.031521   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:49.531483   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:50.032091   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:50.532187   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:51.031753   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:48.502460   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:49.002540   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:49.501945   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:50.002223   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:50.501542   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:51.001659   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:51.501286   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:52.002482   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:52.502722   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:53.001266   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:49.507988   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:50.009496   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:50.509180   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:51.008698   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:51.508772   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:52.008904   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:52.508816   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:53.009066   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:53.507818   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:54.008395   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:51.531785   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:52.031722   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:52.531144   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:53.031857   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:53.531058   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:54.032168   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:54.532777   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:55.032608   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:55.531658   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:56.032994   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:53.501701   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:54.002308   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:54.502069   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:55.002072   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:55.501731   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:56.002148   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:56.503078   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:57.003123   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:57.501899   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:58.002103   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:54.507702   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:55.009409   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:55.508752   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:56.009166   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:56.508117   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:57.009342   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:57.508229   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:58.007650   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:58.514151   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:59.008149   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:56.531183   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:57.030952   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:57.531064   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:58.032714   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:58.532410   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:59.031666   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:59.531454   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:00.032368   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:00.532161   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:01.031779   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:58.502176   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:59.001419   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:59.502257   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:00.002485   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:00.501904   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:01.001645   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:01.501262   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:02.002789   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:02.502720   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:03.001933   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:59.507580   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:00.008671   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:00.508761   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:01.009888   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:01.508049   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:02.009018   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:02.508299   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:03.009024   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:03.507584   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:04.008065   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:01.530966   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:02.031880   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:02.531265   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:03.031652   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:03.532860   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:04.031804   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:04.532296   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:05.031908   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:05.531566   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:06.031333   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:03.501384   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:04.002328   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:04.501432   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:05.002402   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:05.502445   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:06.004922   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:06.501916   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:07.002619   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:07.501038   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:08.001821   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:04.507960   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:05.008882   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:05.508735   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:06.009370   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:06.508266   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:07.009541   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:07.508167   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:08.008293   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:08.509228   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:09.008514   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:06.531404   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:07.032313   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:07.532704   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:08.033420   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:08.532159   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:09.032178   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:09.531613   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:10.035741   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:10.532501   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:11.033104   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:08.502173   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:09.002026   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:09.501239   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:10.001300   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:10.503227   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:11.001826   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:11.501434   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:12.003235   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:12.502432   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:13.002356   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:09.507884   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:10.008334   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:10.508168   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:11.008274   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:11.508025   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:12.008228   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:12.507713   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:13.008537   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:13.508684   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:14.009919   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:11.532599   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:12.035420   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:12.531992   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:13.031944   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:13.531194   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:14.032224   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:14.531672   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:15.031544   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:15.531967   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:16.031448   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:13.501782   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:14.001444   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:14.503454   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:15.002767   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:15.501906   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:16.001726   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:16.502123   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:17.005942   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:17.501817   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:18.001941   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:14.507853   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:15.008476   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:15.508667   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:16.008722   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:16.509046   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:17.008778   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:17.508906   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:18.008492   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:18.508647   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:19.007815   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:16.532200   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:17.031966   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:17.531791   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:18.033536   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:18.532652   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:19.032201   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:19.531928   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:20.033359   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:20.533670   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:21.032187   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:18.501934   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:19.002902   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:19.501267   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:20.002601   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:20.501489   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:21.002545   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:21.501360   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:22.002042   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:22.503032   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:23.001085   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:19.507611   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:20.008196   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:20.509732   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:21.009055   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:21.508388   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:22.007995   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:22.507537   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:23.008854   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:23.508167   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:24.008019   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:21.531647   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:22.034444   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:22.532628   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:23.032333   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:23.531736   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:24.032056   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:24.531916   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:25.031464   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:25.532198   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:26.032089   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:23.501603   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:24.001216   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:24.502879   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:25.001292   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:25.501341   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:26.002410   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:26.502804   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:27.002021   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:27.502279   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:28.002340   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:24.507566   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:25.008774   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:25.509162   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:26.009209   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:26.507648   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:27.009824   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:27.508631   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:28.009013   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:28.507653   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:29.008464   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:26.531694   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:27.032157   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:27.532431   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:28.031890   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:28.533074   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:29.032602   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:29.531803   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:30.032839   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:30.531064   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:31.033390   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:28.502372   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:29.001862   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:29.502294   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:30.001477   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:30.503184   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:31.002218   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:31.502643   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:32.001877   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:32.503311   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:33.002436   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:29.508296   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:30.008304   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:30.508381   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:31.008490   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:31.508829   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:32.007834   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:32.508400   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:33.008794   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:33.509376   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:34.008146   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:31.531920   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:32.033659   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:32.532892   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:33.031391   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:33.532537   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:34.033029   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:34.530956   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:35.031585   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:35.533148   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:36.031532   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:33.502341   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:34.002087   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:34.501994   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:35.001651   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:35.501441   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:36.002140   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:36.501765   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:37.001241   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:37.502479   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:38.002437   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:34.508235   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:35.008483   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:35.508534   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:36.008744   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:36.508702   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:37.008924   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:37.507985   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:38.007421   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:38.507911   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:39.008590   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:36.532045   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:37.031418   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:37.532867   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:38.031333   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:38.532360   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:39.032704   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:39.531535   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:40.033276   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:40.532090   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:41.032674   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:38.503195   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:39.001544   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:39.501650   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:40.001446   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:40.503141   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:41.001293   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:41.501933   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:42.001485   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:42.501393   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:43.001793   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:39.508830   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:40.008286   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:40.508322   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:41.008679   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:41.509263   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:42.008010   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:42.507661   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:43.008954   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:43.508712   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:44.008648   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:41.531115   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:42.033681   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:42.532204   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:43.031525   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:43.532706   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:44.031154   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:44.531400   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:45.032686   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:45.531016   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:46.031694   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:43.500799   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:44.001437   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:44.503087   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:45.001262   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:45.502070   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:46.001597   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:46.501748   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:47.000952   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:47.503068   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:48.002924   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:44.508721   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:45.009360   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:45.507561   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:46.008024   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:46.509438   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:47.008003   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:47.509182   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:48.007694   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:48.509204   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:49.008075   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:46.531475   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:47.032236   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:47.531623   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:48.032627   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:48.531328   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:49.032263   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:49.531544   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:50.031759   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:50.531968   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:51.031169   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:48.502523   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:49.001089   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:49.502166   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:50.002297   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:50.501900   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:51.002177   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:51.501990   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:52.001761   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:52.503411   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:53.001888   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:49.508661   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:50.008645   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:50.509700   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:51.008238   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:51.509485   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:52.008711   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:52.508528   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:53.009157   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:53.508329   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:54.008371   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:51.532470   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:52.033506   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:52.532332   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:53.032618   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:53.532408   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:54.032700   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:54.532680   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:55.030763   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:55.531486   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:56.032694   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:53.501870   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:54.001255   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:54.502146   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:55.001892   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:55.502373   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:56.001923   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:56.502476   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:57.001982   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:57.502446   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:58.003222   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:54.508346   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:55.008513   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:55.509470   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:56.009002   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:56.508067   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:57.007514   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:57.508798   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:58.008828   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:58.508496   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:59.008238   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:56.531146   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:57.031591   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:57.532375   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:58.033082   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:58.531649   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:59.031902   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:59.532588   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:00.032880   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:00.532136   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:01.028606   55595 kapi.go:81] temporary error: getting Pods with label selector "app.kubernetes.io/name=kubernetes-dashboard-web" : [client rate limiter Wait returned an error: context deadline exceeded]
	I1219 04:00:01.028642   55595 kapi.go:107] duration metric: took 6m0.000598506s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	W1219 04:00:01.028754   55595 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [waiting for app.kubernetes.io/name=kubernetes-dashboard-web pods: context deadline exceeded]
	I1219 04:00:01.030295   55595 out.go:179] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I1219 04:00:01.031288   55595 addons.go:546] duration metric: took 6m6.695311639s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I1219 04:00:01.031318   55595 start.go:247] waiting for cluster config update ...
	I1219 04:00:01.031329   55595 start.go:256] writing updated cluster config ...
	I1219 04:00:01.031596   55595 ssh_runner.go:195] Run: rm -f paused
	I1219 04:00:01.039401   55595 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 04:00:01.043907   55595 pod_ready.go:83] waiting for pod "coredns-7d764666f9-s7729" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:01.050711   55595 pod_ready.go:94] pod "coredns-7d764666f9-s7729" is "Ready"
	I1219 04:00:01.050733   55595 pod_ready.go:86] duration metric: took 6.803187ms for pod "coredns-7d764666f9-s7729" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:01.053765   55595 pod_ready.go:83] waiting for pod "etcd-no-preload-298059" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:01.058213   55595 pod_ready.go:94] pod "etcd-no-preload-298059" is "Ready"
	I1219 04:00:01.058234   55595 pod_ready.go:86] duration metric: took 4.447718ms for pod "etcd-no-preload-298059" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:01.060300   55595 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-298059" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:01.065142   55595 pod_ready.go:94] pod "kube-apiserver-no-preload-298059" is "Ready"
	I1219 04:00:01.065166   55595 pod_ready.go:86] duration metric: took 4.840116ms for pod "kube-apiserver-no-preload-298059" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:01.067284   55595 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-298059" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:01.445171   55595 pod_ready.go:94] pod "kube-controller-manager-no-preload-298059" is "Ready"
	I1219 04:00:01.445200   55595 pod_ready.go:86] duration metric: took 377.900542ms for pod "kube-controller-manager-no-preload-298059" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:01.645417   55595 pod_ready.go:83] waiting for pod "kube-proxy-mdfxl" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:02.044330   55595 pod_ready.go:94] pod "kube-proxy-mdfxl" is "Ready"
	I1219 04:00:02.044377   55595 pod_ready.go:86] duration metric: took 398.907218ms for pod "kube-proxy-mdfxl" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:02.245766   55595 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-298059" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:02.645879   55595 pod_ready.go:94] pod "kube-scheduler-no-preload-298059" is "Ready"
	I1219 04:00:02.645937   55595 pod_ready.go:86] duration metric: took 400.143888ms for pod "kube-scheduler-no-preload-298059" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:02.645954   55595 pod_ready.go:40] duration metric: took 1.606522986s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 04:00:02.697158   55595 start.go:625] kubectl: 1.35.0, cluster: 1.35.0-rc.1 (minor skew: 0)
	I1219 04:00:02.698980   55595 out.go:179] * Done! kubectl is now configured to use "no-preload-298059" cluster and "default" namespace by default
	I1219 03:59:58.502543   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:59.001139   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:59.501649   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:00.001415   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:00.502374   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:01.002272   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:01.502072   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:02.002694   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:02.501377   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:03.002499   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:59.508999   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:00.009465   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:00.508462   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:01.008805   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:01.509068   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:02.007682   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:02.508807   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:03.009533   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:03.509171   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:04.008344   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:03.501482   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:04.002080   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:04.502514   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:05.002257   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:05.502741   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:06.001565   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:06.502968   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:07.002364   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:07.502630   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:08.001239   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:04.508168   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:05.007952   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:05.508714   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:06.008805   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:06.508239   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:07.009278   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:07.509811   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:08.008945   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:08.513267   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:09.008127   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:08.502641   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:09.002630   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:09.501272   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:10.001592   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:10.502177   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:11.002030   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:11.501972   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:12.001917   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:12.502061   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:13.001824   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:09.508106   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:10.007937   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:10.507970   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:11.008418   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:11.508614   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:12.007994   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:12.508452   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:13.008632   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:13.510343   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:14.008029   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:13.501559   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:14.000819   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:14.501592   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:15.002062   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:15.501962   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:16.001720   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:16.502083   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:17.002024   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:17.501681   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:18.001502   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:14.507866   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:15.009254   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:15.508704   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:16.008650   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:16.508846   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:17.010798   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:17.507933   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:18.009073   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:18.508337   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:19.008331   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:18.502462   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:19.003975   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:19.501373   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:20.002075   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:20.502437   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:21.001953   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:21.501417   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:22.003083   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:22.501515   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:23.001553   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:19.509712   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:20.008196   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:20.507361   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:21.008284   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:21.508302   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:22.007728   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:22.509259   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:23.008711   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:23.509664   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:24.008507   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:23.502079   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:24.001986   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:24.501922   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:25.001179   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:25.502972   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:26.002165   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:26.502809   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:27.001369   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:27.502083   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:28.002507   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:24.508264   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:25.008006   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:25.509488   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:26.008519   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:26.508978   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:27.008309   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:27.508775   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:28.009625   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:28.508731   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:29.009043   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:28.502787   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:29.001831   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:29.502430   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:29.998860   55957 kapi.go:81] temporary error: getting Pods with label selector "app.kubernetes.io/name=kubernetes-dashboard-web" : [client rate limiter Wait returned an error: context deadline exceeded]
	I1219 04:00:29.998886   55957 kapi.go:107] duration metric: took 6m0.000824832s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	W1219 04:00:29.998960   55957 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [waiting for app.kubernetes.io/name=kubernetes-dashboard-web pods: context deadline exceeded]
	I1219 04:00:30.000498   55957 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1219 04:00:30.001513   55957 addons.go:546] duration metric: took 6m7.141140342s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1219 04:00:30.001540   55957 start.go:247] waiting for cluster config update ...
	I1219 04:00:30.001550   55957 start.go:256] writing updated cluster config ...
	I1219 04:00:30.001800   55957 ssh_runner.go:195] Run: rm -f paused
	I1219 04:00:30.010656   55957 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 04:00:30.015390   55957 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9ptrv" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:30.020029   55957 pod_ready.go:94] pod "coredns-66bc5c9577-9ptrv" is "Ready"
	I1219 04:00:30.020051   55957 pod_ready.go:86] duration metric: took 4.638733ms for pod "coredns-66bc5c9577-9ptrv" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:30.022246   55957 pod_ready.go:83] waiting for pod "etcd-embed-certs-244717" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:30.026208   55957 pod_ready.go:94] pod "etcd-embed-certs-244717" is "Ready"
	I1219 04:00:30.026224   55957 pod_ready.go:86] duration metric: took 3.954396ms for pod "etcd-embed-certs-244717" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:30.028026   55957 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-244717" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:30.033934   55957 pod_ready.go:94] pod "kube-apiserver-embed-certs-244717" is "Ready"
	I1219 04:00:30.033951   55957 pod_ready.go:86] duration metric: took 5.905842ms for pod "kube-apiserver-embed-certs-244717" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:30.036019   55957 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-244717" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:30.417680   55957 pod_ready.go:94] pod "kube-controller-manager-embed-certs-244717" is "Ready"
	I1219 04:00:30.417709   55957 pod_ready.go:86] duration metric: took 381.673199ms for pod "kube-controller-manager-embed-certs-244717" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:30.616122   55957 pod_ready.go:83] waiting for pod "kube-proxy-p8gvm" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:31.015548   55957 pod_ready.go:94] pod "kube-proxy-p8gvm" is "Ready"
	I1219 04:00:31.015585   55957 pod_ready.go:86] duration metric: took 399.442531ms for pod "kube-proxy-p8gvm" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:31.216107   55957 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-244717" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:31.615784   55957 pod_ready.go:94] pod "kube-scheduler-embed-certs-244717" is "Ready"
	I1219 04:00:31.615816   55957 pod_ready.go:86] duration metric: took 399.682179ms for pod "kube-scheduler-embed-certs-244717" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:31.615832   55957 pod_ready.go:40] duration metric: took 1.605153664s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 04:00:31.662639   55957 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 04:00:31.664208   55957 out.go:179] * Done! kubectl is now configured to use "embed-certs-244717" cluster and "default" namespace by default
	I1219 04:00:29.508455   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:30.007925   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:30.507876   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:31.007766   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:31.509691   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:32.008321   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:32.509128   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:33.008511   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:33.509110   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:34.008834   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:34.508661   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:35.009145   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:35.510268   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:36.007810   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:36.508457   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:37.008511   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:37.508340   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:38.008906   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:38.508226   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:39.007515   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:39.508398   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:40.008048   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:40.507411   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:41.008044   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:41.509491   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:42.008720   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:42.508893   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:43.008890   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:43.507746   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:44.008735   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:44.508515   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:45.008316   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:45.508925   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:46.007410   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:46.507809   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:47.007816   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:47.507934   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:48.008317   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:48.511438   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:49.008355   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:49.508479   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:50.008867   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:50.507492   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:51.008220   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:51.508283   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:52.008800   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:52.508617   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:53.008464   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:53.508878   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:54.008198   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:54.509007   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:55.007919   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:55.507118   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:56.008201   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:56.507989   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:57.007872   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:57.508142   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:58.008008   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:58.504601   56230 kapi.go:81] temporary error: getting Pods with label selector "app.kubernetes.io/name=kubernetes-dashboard-web" : [client rate limiter Wait returned an error: context deadline exceeded]
	I1219 04:00:58.504633   56230 kapi.go:107] duration metric: took 6m0.000289249s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	W1219 04:00:58.504722   56230 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [waiting for app.kubernetes.io/name=kubernetes-dashboard-web pods: context deadline exceeded]
	I1219 04:00:58.506261   56230 out.go:179] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I1219 04:00:58.507432   56230 addons.go:546] duration metric: took 6m6.536744168s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I1219 04:00:58.507471   56230 start.go:247] waiting for cluster config update ...
	I1219 04:00:58.507487   56230 start.go:256] writing updated cluster config ...
	I1219 04:00:58.507818   56230 ssh_runner.go:195] Run: rm -f paused
	I1219 04:00:58.516094   56230 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 04:00:58.521203   56230 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dnfcc" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:58.526011   56230 pod_ready.go:94] pod "coredns-66bc5c9577-dnfcc" is "Ready"
	I1219 04:00:58.526035   56230 pod_ready.go:86] duration metric: took 4.809568ms for pod "coredns-66bc5c9577-dnfcc" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:58.528592   56230 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-168174" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:58.534102   56230 pod_ready.go:94] pod "etcd-default-k8s-diff-port-168174" is "Ready"
	I1219 04:00:58.534119   56230 pod_ready.go:86] duration metric: took 5.507213ms for pod "etcd-default-k8s-diff-port-168174" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:58.536078   56230 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-168174" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:58.540931   56230 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-168174" is "Ready"
	I1219 04:00:58.540951   56230 pod_ready.go:86] duration metric: took 4.854792ms for pod "kube-apiserver-default-k8s-diff-port-168174" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:58.542905   56230 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-168174" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:58.920520   56230 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-168174" is "Ready"
	I1219 04:00:58.920546   56230 pod_ready.go:86] duration metric: took 377.623833ms for pod "kube-controller-manager-default-k8s-diff-port-168174" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:59.120738   56230 pod_ready.go:83] waiting for pod "kube-proxy-zs4wg" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:59.520222   56230 pod_ready.go:94] pod "kube-proxy-zs4wg" is "Ready"
	I1219 04:00:59.520254   56230 pod_ready.go:86] duration metric: took 399.487462ms for pod "kube-proxy-zs4wg" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:59.721383   56230 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-168174" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:01:00.120982   56230 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-168174" is "Ready"
	I1219 04:01:00.121009   56230 pod_ready.go:86] duration metric: took 399.598924ms for pod "kube-scheduler-default-k8s-diff-port-168174" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:01:00.121020   56230 pod_ready.go:40] duration metric: took 1.604899766s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 04:01:00.167943   56230 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 04:01:00.169437   56230 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-168174" cluster and "default" namespace by default
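(For context on the repeated kapi.go:96 entries above: the dashboard addon waiter polls the API server roughly every 500ms for pods matching the label selector app.kubernetes.io/name=kubernetes-dashboard-web and gives up once the 6m0s context deadline expires, which is the "Enabling 'dashboard' returned an error" outcome recorded for all three profiles. Below is a minimal, illustrative client-go sketch of that polling pattern; it is not minikube's actual implementation. The function name waitForLabeledPod, the 500ms interval, and the kubeconfig handling are assumptions; only the label selector and the 6-minute budget are taken from the log, and the namespace matches the kubernetes-dashboard pods visible in the CRI-O listing further down.)

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForLabeledPod polls until at least one pod matching selector in ns is
    // Running, or until timeout. List errors are treated as temporary and
    // retried, mirroring the "temporary error" / "context deadline exceeded"
    // lines in the log above.
    func waitForLabeledPod(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
                if err != nil {
                    fmt.Printf("temporary error listing pods: %v\n", err)
                    return false, nil
                }
                for _, p := range pods.Items {
                    if p.Status.Phase == corev1.PodRunning {
                        return true, nil
                    }
                }
                fmt.Printf("waiting for pod %q, not yet Running\n", selector)
                return false, nil
            })
    }

    func main() {
        // Assumption: default kubeconfig location; a test harness would point
        // this at the relevant profile's kubeconfig context instead.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Label selector and 6-minute budget match the log; the namespace is
        // assumed from the dashboard pods seen in the CRI-O output below.
        err = waitForLabeledPod(context.Background(), cs,
            "kubernetes-dashboard", "app.kubernetes.io/name=kubernetes-dashboard-web", 6*time.Minute)
        fmt.Println("wait result:", err)
    }

(The sketch returns success on the first Running pod; the real waiter also tracks per-pod state, but the failure mode seen here, a timeout because no matching pod ever left Pending, would surface the same way.)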
	
	
	==> CRI-O <==
	Dec 19 04:02:56 old-k8s-version-094166 crio[884]: time="2025-12-19 04:02:56.768720263Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766116976768697037,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:198516,},InodesUsed:&UInt64Value{Value:87,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7df806fb-4dd1-4ba0-8f38-ad140ca8be63 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 04:02:56 old-k8s-version-094166 crio[884]: time="2025-12-19 04:02:56.770050191Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d7b7fc99-d334-4463-839f-d965bd9f432a name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:02:56 old-k8s-version-094166 crio[884]: time="2025-12-19 04:02:56.770157215Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d7b7fc99-d334-4463-839f-d965bd9f432a name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:02:56 old-k8s-version-094166 crio[884]: time="2025-12-19 04:02:56.770566908Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4ba53a084d3418609c316a45fc2fb31ecffa561d929d3c31c11897926152f7f7,PodSandboxId:1d41e1adeda344c2b6bf312ecd196cb3f294b1dd0940b74440188e728e4f8236,Metadata:&ContainerMetadata{Name:proxy,Attempt:0,},Image:&ImageSpec{Image:3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480,State:CONTAINER_RUNNING,CreatedAt:1766116455714758363,Labels:map[string]string{io.kubernetes.container.name: proxy,io.kubernetes.pod.name: kubernetes-dashboard-kong-f487b85cd-6h64p,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: ce63b698-0626-4cdb-9035-f9f905d770cf,},Annotations:map[string]string{io.kubernetes.container.hash: f625957b,io.kubernetes.container.ports: [{\"name\":\"proxy-
tls\",\"containerPort\":8443,\"protocol\":\"TCP\"},{\"name\":\"status\",\"containerPort\":8100,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"kong\",\"quit\",\"--wait=15\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29b3a5531151deeda30e8690a92f8af8076432b199545941605702c1654d71ca,PodSandboxId:1d41e1adeda344c2b6bf312ecd196cb3f294b1dd0940b74440188e728e4f8236,Metadata:&ContainerMetadata{Name:clear-stale-pid,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/kong@sha256:4379444ecfd82794b27de38a74ba540e8571683dfdfce74c8ecb4018f308fb29,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480,State:CONTAINER_EXITED,CreatedAt:1766116454700263443,Labels:map[string]string{io.kubernetes.container.name: clear-
stale-pid,io.kubernetes.pod.name: kubernetes-dashboard-kong-f487b85cd-6h64p,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: ce63b698-0626-4cdb-9035-f9f905d770cf,},Annotations:map[string]string{io.kubernetes.container.hash: fc5d9912,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30d52d8be50dac733ff4ec86c05667c686a18577fa43f59488bb796d1a5f17cc,PodSandboxId:d8561e78594775abe17c477f444bda5ccdf4feec623c89b0dd77dd900a2cf47a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766116443517547989,Labels:map[string]string{io.kubernetes.contain
er.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6361f87-8123-46eb-8f28-110eb02b1927,},Annotations:map[string]string{io.kubernetes.container.hash: 158eaf00,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:278a539192a9e7c904fb20462e7177dd70e833a4bcfc50bfda225435b77cdac4,PodSandboxId:ac4f5da2781b4b44297b4d7adecf74ff7c3ad33beaee16395c5e4882e53eed9f,Metadata:&ContainerMetadata{Name:kubernetes-dashboard-api,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard-api@sha256:2bd14c0ffee99d15fb1595644ebd1083ac32c5157c6e6fd8615b0f556a1390c2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0607af4fcd8ae78708c5bd51f34ccf8442967b8e41f56f008cc2884690f2f3b,State:CONTAINER_RUNNING,CreatedAt:1766116442753753567,Labels:ma
p[string]string{io.kubernetes.container.name: kubernetes-dashboard-api,io.kubernetes.pod.name: kubernetes-dashboard-api-56d75ddbb-tppfn,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 185d3f92-b7c0-43d3-a6f5-b9d52d8d41c0,},Annotations:map[string]string{io.kubernetes.container.hash: 547c95c6,io.kubernetes.container.ports: [{\"name\":\"api\",\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c52846cd4715f53aa4a5f474e43006e8c5483d90b1849df8dc0075dff1b6083c,PodSandboxId:88153576fecac10bb0a65a4a0730fde12e8880a90d8a049f71fef56dd129ead9,Metadata:&ContainerMetadata{Name:kubernetes-dashboard-auth,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard-auth@sha256:484c65877a3aa9e7095745576e55310f09f135b2fe5d9694289352fe65986052,Annotations:map[string]string{},UserS
pecifiedImage:,RuntimeHandler:,},ImageRef:dd54374d0ab14ab65dd3d5d975f97d5b4aff7b221db479874a4429225b9b22b1,State:CONTAINER_RUNNING,CreatedAt:1766116439276415330,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard-auth,io.kubernetes.pod.name: kubernetes-dashboard-auth-84ff87fdd5-zd9bz,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 1c55220d-544f-4947-b5c5-afdb85361029,},Annotations:map[string]string{io.kubernetes.container.hash: e1c595d7,io.kubernetes.container.ports: [{\"name\":\"auth\",\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:633226fb3e30d41ed72df6ade31838a480d47853a5dd02fecf2e176fc3b823b6,PodSandboxId:f31a88a220665e883c5e31493836838da73f09f4c18d8866a75898e7c9c7feb4,Metadata:&ContainerMetadata{Name:kubernetes-dashboard-metrics-scrap
er,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard-metrics-scraper@sha256:0cdefa0492f2868b93f43880ffad4bd8ae8105c02dc5661352a0a40723b05dbc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d9cbc9f4053ca11fa6edb814b512ab807a89346934700a3a425fc1e24a3f2167,State:CONTAINER_RUNNING,CreatedAt:1766116436190115082,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard-metrics-scraper,io.kubernetes.pod.name: kubernetes-dashboard-metrics-scraper-6b5c7dc479-5rsmg,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 43deb4b4-c100-4378-917c-1eefb131c216,},Annotations:map[string]string{io.kubernetes.container.hash: 33f81994,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d1bca547a
8cb13056ce6f41cd3d2827332c49678081ed4fdf1e5f07e548e8e9,PodSandboxId:5896d1e86f1716bc991865708d89f4e07225e965a5eae42250327484c19431a8,Metadata:&ContainerMetadata{Name:kubernetes-dashboard-web,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard-web@sha256:5924e55a2eb65774b68e35fd0d0b41a3ef5dc6ef3e02dce6e340cf59b6d67d30,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59f642f485d26d479d2dedc7c6139f5ce41939fa22c1152314fefdf3c463aa06,State:CONTAINER_RUNNING,CreatedAt:1766116432981249997,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard-web,io.kubernetes.pod.name: kubernetes-dashboard-web-858bd7466-c5kzr,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 5ae4b527-2bef-4144-a371-dc2f4116a3c3,},Annotations:map[string]string{io.kubernetes.container.hash: 1b2182b9,io.kubernetes.container.ports: [{\"name\":\"web\",\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:796faeaf498a0cf66778ac0bfdca4997a21c3fa259a33a7e4ed26351885ee0c9,PodSandboxId:7b093825f1fe4dd66c14ac2c10e7605906deab68f3acb0588b21657d6fea391e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1766116423691471037,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e1fc5999-4caf-496d-a302-707570e1f019,},Annotations:map[string]string{io.kubernetes.container.hash: c60f0de3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25458f0b01a86f841f0b80d9682d174573303f6a35d4a678193134bd34277761,PodSandboxId:fb0cbab3c54c59215d3f9b0027ec49fe8b34334a5cff2f1f4397a3758d14a620,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1766116420263737361,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jwzpn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4598eae-ba53-4173-bc4a-535887dc6d10,},Annotations:map[string]string{io.kubernetes.container.hash: fffff994,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"
containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c23bdc763b01672e6cb7e934236bc23e89892665c4518cce1a92bfd3230cd50,PodSandboxId:d8561e78594775abe17c477f444bda5ccdf4feec623c89b0dd77dd900a2cf47a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766116412592374221,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6361f87-812
3-46eb-8f28-110eb02b1927,},Annotations:map[string]string{io.kubernetes.container.hash: 158eaf00,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba408cece6208ddeb9327287a76e8a3d4f03bcc10c4b233b10f9959864c891b5,PodSandboxId:4d2c29ff4a2ed8cef6afbbd6aae68bc1475bd43ce62d8b9ac7d89237ae86f824,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a,State:CONTAINER_RUNNING,CreatedAt:1766116412573827827,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k4c59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d1adab-9314-4395-95f2-1ca383aefee1,},Annota
tions:map[string]string{io.kubernetes.container.hash: a49c72a6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f352655401bed294bcff033ea5d2aa2a0bdf87de5fbb1206e8731b680ce582c,PodSandboxId:2ed304f66f5d1100845fc56cd8d93da79cff42aca9a0c55c230af2194535cb4c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157,State:CONTAINER_RUNNING,CreatedAt:1766116408686391475,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-094166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dfb606b3e2663e111185ab803de8836,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 2df5c40d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf6833537f6aeed3fadb0f5baf15f935306cc5b6f0468e6e56b3922330e158ee,PodSandboxId:022dd0dc606224b9ea84f65a61d92ae969d481f95b322acbe2df9b403d830644,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1766116408697700650,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-094166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31afd2a7541d649fb566c092b7f15410,},Annotations:map[string]string{io.kubernetes.containe
r.hash: 5ddae9ea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d28987fec36e57788da0534ff938f4677376fcea5c129e696240ae6fc0ed015,PodSandboxId:31bb13c0703b5ea7fbb41944a5204406197098847c44617acb41400f918cd15d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95,State:CONTAINER_RUNNING,CreatedAt:1766116408617770483,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-094166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b78f15083684b811afd94e8a9518e12,},Annotations:map[string]string{io.kubernetes.container.hash:
404b5e1f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00fed501023e3502b030b4bb79ef632a05bce26dfa057b799045368d55f79e28,PodSandboxId:3cf899f74093f994cd317a64f629e1e64bf7e1d45d395ad5b5d6acfedd333e46,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62,State:CONTAINER_RUNNING,CreatedAt:1766116408638461352,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-094166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5474cd52daad06c999fd54796fd62f6,},Annotations:map[string]string{io.kubernet
es.container.hash: 78dfeacb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d7b7fc99-d334-4463-839f-d965bd9f432a name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:02:56 old-k8s-version-094166 crio[884]: time="2025-12-19 04:02:56.805674096Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c05e1d85-51ba-4d16-9808-9d9d682622ee name=/runtime.v1.RuntimeService/Version
	Dec 19 04:02:56 old-k8s-version-094166 crio[884]: time="2025-12-19 04:02:56.805763220Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c05e1d85-51ba-4d16-9808-9d9d682622ee name=/runtime.v1.RuntimeService/Version
	Dec 19 04:02:56 old-k8s-version-094166 crio[884]: time="2025-12-19 04:02:56.807699285Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a538484d-c705-4299-be1b-629c649a4cdc name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 04:02:56 old-k8s-version-094166 crio[884]: time="2025-12-19 04:02:56.808357123Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766116976808334343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:198516,},InodesUsed:&UInt64Value{Value:87,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a538484d-c705-4299-be1b-629c649a4cdc name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 04:02:56 old-k8s-version-094166 crio[884]: time="2025-12-19 04:02:56.809499776Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b5438736-27bc-4855-8a74-13314afec204 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:02:56 old-k8s-version-094166 crio[884]: time="2025-12-19 04:02:56.809613716Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b5438736-27bc-4855-8a74-13314afec204 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:02:56 old-k8s-version-094166 crio[884]: time="2025-12-19 04:02:56.809967401Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4ba53a084d3418609c316a45fc2fb31ecffa561d929d3c31c11897926152f7f7,PodSandboxId:1d41e1adeda344c2b6bf312ecd196cb3f294b1dd0940b74440188e728e4f8236,Metadata:&ContainerMetadata{Name:proxy,Attempt:0,},Image:&ImageSpec{Image:3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480,State:CONTAINER_RUNNING,CreatedAt:1766116455714758363,Labels:map[string]string{io.kubernetes.container.name: proxy,io.kubernetes.pod.name: kubernetes-dashboard-kong-f487b85cd-6h64p,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: ce63b698-0626-4cdb-9035-f9f905d770cf,},Annotations:map[string]string{io.kubernetes.container.hash: f625957b,io.kubernetes.container.ports: [{\"name\":\"proxy-
tls\",\"containerPort\":8443,\"protocol\":\"TCP\"},{\"name\":\"status\",\"containerPort\":8100,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"kong\",\"quit\",\"--wait=15\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29b3a5531151deeda30e8690a92f8af8076432b199545941605702c1654d71ca,PodSandboxId:1d41e1adeda344c2b6bf312ecd196cb3f294b1dd0940b74440188e728e4f8236,Metadata:&ContainerMetadata{Name:clear-stale-pid,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/kong@sha256:4379444ecfd82794b27de38a74ba540e8571683dfdfce74c8ecb4018f308fb29,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480,State:CONTAINER_EXITED,CreatedAt:1766116454700263443,Labels:map[string]string{io.kubernetes.container.name: clear-
stale-pid,io.kubernetes.pod.name: kubernetes-dashboard-kong-f487b85cd-6h64p,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: ce63b698-0626-4cdb-9035-f9f905d770cf,},Annotations:map[string]string{io.kubernetes.container.hash: fc5d9912,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30d52d8be50dac733ff4ec86c05667c686a18577fa43f59488bb796d1a5f17cc,PodSandboxId:d8561e78594775abe17c477f444bda5ccdf4feec623c89b0dd77dd900a2cf47a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766116443517547989,Labels:map[string]string{io.kubernetes.contain
er.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6361f87-8123-46eb-8f28-110eb02b1927,},Annotations:map[string]string{io.kubernetes.container.hash: 158eaf00,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:278a539192a9e7c904fb20462e7177dd70e833a4bcfc50bfda225435b77cdac4,PodSandboxId:ac4f5da2781b4b44297b4d7adecf74ff7c3ad33beaee16395c5e4882e53eed9f,Metadata:&ContainerMetadata{Name:kubernetes-dashboard-api,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard-api@sha256:2bd14c0ffee99d15fb1595644ebd1083ac32c5157c6e6fd8615b0f556a1390c2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0607af4fcd8ae78708c5bd51f34ccf8442967b8e41f56f008cc2884690f2f3b,State:CONTAINER_RUNNING,CreatedAt:1766116442753753567,Labels:ma
p[string]string{io.kubernetes.container.name: kubernetes-dashboard-api,io.kubernetes.pod.name: kubernetes-dashboard-api-56d75ddbb-tppfn,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 185d3f92-b7c0-43d3-a6f5-b9d52d8d41c0,},Annotations:map[string]string{io.kubernetes.container.hash: 547c95c6,io.kubernetes.container.ports: [{\"name\":\"api\",\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c52846cd4715f53aa4a5f474e43006e8c5483d90b1849df8dc0075dff1b6083c,PodSandboxId:88153576fecac10bb0a65a4a0730fde12e8880a90d8a049f71fef56dd129ead9,Metadata:&ContainerMetadata{Name:kubernetes-dashboard-auth,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard-auth@sha256:484c65877a3aa9e7095745576e55310f09f135b2fe5d9694289352fe65986052,Annotations:map[string]string{},UserS
pecifiedImage:,RuntimeHandler:,},ImageRef:dd54374d0ab14ab65dd3d5d975f97d5b4aff7b221db479874a4429225b9b22b1,State:CONTAINER_RUNNING,CreatedAt:1766116439276415330,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard-auth,io.kubernetes.pod.name: kubernetes-dashboard-auth-84ff87fdd5-zd9bz,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 1c55220d-544f-4947-b5c5-afdb85361029,},Annotations:map[string]string{io.kubernetes.container.hash: e1c595d7,io.kubernetes.container.ports: [{\"name\":\"auth\",\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:633226fb3e30d41ed72df6ade31838a480d47853a5dd02fecf2e176fc3b823b6,PodSandboxId:f31a88a220665e883c5e31493836838da73f09f4c18d8866a75898e7c9c7feb4,Metadata:&ContainerMetadata{Name:kubernetes-dashboard-metrics-scrap
er,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard-metrics-scraper@sha256:0cdefa0492f2868b93f43880ffad4bd8ae8105c02dc5661352a0a40723b05dbc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d9cbc9f4053ca11fa6edb814b512ab807a89346934700a3a425fc1e24a3f2167,State:CONTAINER_RUNNING,CreatedAt:1766116436190115082,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard-metrics-scraper,io.kubernetes.pod.name: kubernetes-dashboard-metrics-scraper-6b5c7dc479-5rsmg,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 43deb4b4-c100-4378-917c-1eefb131c216,},Annotations:map[string]string{io.kubernetes.container.hash: 33f81994,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d1bca547a
8cb13056ce6f41cd3d2827332c49678081ed4fdf1e5f07e548e8e9,PodSandboxId:5896d1e86f1716bc991865708d89f4e07225e965a5eae42250327484c19431a8,Metadata:&ContainerMetadata{Name:kubernetes-dashboard-web,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard-web@sha256:5924e55a2eb65774b68e35fd0d0b41a3ef5dc6ef3e02dce6e340cf59b6d67d30,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59f642f485d26d479d2dedc7c6139f5ce41939fa22c1152314fefdf3c463aa06,State:CONTAINER_RUNNING,CreatedAt:1766116432981249997,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard-web,io.kubernetes.pod.name: kubernetes-dashboard-web-858bd7466-c5kzr,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 5ae4b527-2bef-4144-a371-dc2f4116a3c3,},Annotations:map[string]string{io.kubernetes.container.hash: 1b2182b9,io.kubernetes.container.ports: [{\"name\":\"web\",\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:796faeaf498a0cf66778ac0bfdca4997a21c3fa259a33a7e4ed26351885ee0c9,PodSandboxId:7b093825f1fe4dd66c14ac2c10e7605906deab68f3acb0588b21657d6fea391e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1766116423691471037,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e1fc5999-4caf-496d-a302-707570e1f019,},Annotations:map[string]string{io.kubernetes.container.hash: c60f0de3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25458f0b01a86f841f0b80d9682d174573303f6a35d4a678193134bd34277761,PodSandboxId:fb0cbab3c54c59215d3f9b0027ec49fe8b34334a5cff2f1f4397a3758d14a620,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1766116420263737361,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jwzpn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4598eae-ba53-4173-bc4a-535887dc6d10,},Annotations:map[string]string{io.kubernetes.container.hash: fffff994,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"
containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c23bdc763b01672e6cb7e934236bc23e89892665c4518cce1a92bfd3230cd50,PodSandboxId:d8561e78594775abe17c477f444bda5ccdf4feec623c89b0dd77dd900a2cf47a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766116412592374221,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6361f87-812
3-46eb-8f28-110eb02b1927,},Annotations:map[string]string{io.kubernetes.container.hash: 158eaf00,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba408cece6208ddeb9327287a76e8a3d4f03bcc10c4b233b10f9959864c891b5,PodSandboxId:4d2c29ff4a2ed8cef6afbbd6aae68bc1475bd43ce62d8b9ac7d89237ae86f824,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a,State:CONTAINER_RUNNING,CreatedAt:1766116412573827827,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k4c59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d1adab-9314-4395-95f2-1ca383aefee1,},Annota
tions:map[string]string{io.kubernetes.container.hash: a49c72a6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f352655401bed294bcff033ea5d2aa2a0bdf87de5fbb1206e8731b680ce582c,PodSandboxId:2ed304f66f5d1100845fc56cd8d93da79cff42aca9a0c55c230af2194535cb4c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157,State:CONTAINER_RUNNING,CreatedAt:1766116408686391475,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-094166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dfb606b3e2663e111185ab803de8836,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 2df5c40d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf6833537f6aeed3fadb0f5baf15f935306cc5b6f0468e6e56b3922330e158ee,PodSandboxId:022dd0dc606224b9ea84f65a61d92ae969d481f95b322acbe2df9b403d830644,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1766116408697700650,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-094166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31afd2a7541d649fb566c092b7f15410,},Annotations:map[string]string{io.kubernetes.containe
r.hash: 5ddae9ea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d28987fec36e57788da0534ff938f4677376fcea5c129e696240ae6fc0ed015,PodSandboxId:31bb13c0703b5ea7fbb41944a5204406197098847c44617acb41400f918cd15d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95,State:CONTAINER_RUNNING,CreatedAt:1766116408617770483,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-094166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b78f15083684b811afd94e8a9518e12,},Annotations:map[string]string{io.kubernetes.container.hash:
404b5e1f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00fed501023e3502b030b4bb79ef632a05bce26dfa057b799045368d55f79e28,PodSandboxId:3cf899f74093f994cd317a64f629e1e64bf7e1d45d395ad5b5d6acfedd333e46,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62,State:CONTAINER_RUNNING,CreatedAt:1766116408638461352,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-094166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5474cd52daad06c999fd54796fd62f6,},Annotations:map[string]string{io.kubernet
es.container.hash: 78dfeacb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b5438736-27bc-4855-8a74-13314afec204 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:02:56 old-k8s-version-094166 crio[884]: time="2025-12-19 04:02:56.846651681Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c9ea3bf5-31de-403c-b717-245ba9de930f name=/runtime.v1.RuntimeService/Version
	Dec 19 04:02:56 old-k8s-version-094166 crio[884]: time="2025-12-19 04:02:56.846738768Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c9ea3bf5-31de-403c-b717-245ba9de930f name=/runtime.v1.RuntimeService/Version
	Dec 19 04:02:56 old-k8s-version-094166 crio[884]: time="2025-12-19 04:02:56.847983025Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=866c1dbd-71c7-44d7-a38f-e40e448d913d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 04:02:56 old-k8s-version-094166 crio[884]: time="2025-12-19 04:02:56.848764179Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766116976848738807,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:198516,},InodesUsed:&UInt64Value{Value:87,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=866c1dbd-71c7-44d7-a38f-e40e448d913d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 04:02:56 old-k8s-version-094166 crio[884]: time="2025-12-19 04:02:56.849656991Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6276129a-ca34-4ca9-a018-6dac4bf67791 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:02:56 old-k8s-version-094166 crio[884]: time="2025-12-19 04:02:56.849711315Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6276129a-ca34-4ca9-a018-6dac4bf67791 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:02:56 old-k8s-version-094166 crio[884]: time="2025-12-19 04:02:56.850051588Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4ba53a084d3418609c316a45fc2fb31ecffa561d929d3c31c11897926152f7f7,PodSandboxId:1d41e1adeda344c2b6bf312ecd196cb3f294b1dd0940b74440188e728e4f8236,Metadata:&ContainerMetadata{Name:proxy,Attempt:0,},Image:&ImageSpec{Image:3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480,State:CONTAINER_RUNNING,CreatedAt:1766116455714758363,Labels:map[string]string{io.kubernetes.container.name: proxy,io.kubernetes.pod.name: kubernetes-dashboard-kong-f487b85cd-6h64p,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: ce63b698-0626-4cdb-9035-f9f905d770cf,},Annotations:map[string]string{io.kubernetes.container.hash: f625957b,io.kubernetes.container.ports: [{\"name\":\"proxy-
tls\",\"containerPort\":8443,\"protocol\":\"TCP\"},{\"name\":\"status\",\"containerPort\":8100,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"kong\",\"quit\",\"--wait=15\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29b3a5531151deeda30e8690a92f8af8076432b199545941605702c1654d71ca,PodSandboxId:1d41e1adeda344c2b6bf312ecd196cb3f294b1dd0940b74440188e728e4f8236,Metadata:&ContainerMetadata{Name:clear-stale-pid,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/kong@sha256:4379444ecfd82794b27de38a74ba540e8571683dfdfce74c8ecb4018f308fb29,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480,State:CONTAINER_EXITED,CreatedAt:1766116454700263443,Labels:map[string]string{io.kubernetes.container.name: clear-
stale-pid,io.kubernetes.pod.name: kubernetes-dashboard-kong-f487b85cd-6h64p,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: ce63b698-0626-4cdb-9035-f9f905d770cf,},Annotations:map[string]string{io.kubernetes.container.hash: fc5d9912,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30d52d8be50dac733ff4ec86c05667c686a18577fa43f59488bb796d1a5f17cc,PodSandboxId:d8561e78594775abe17c477f444bda5ccdf4feec623c89b0dd77dd900a2cf47a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766116443517547989,Labels:map[string]string{io.kubernetes.contain
er.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6361f87-8123-46eb-8f28-110eb02b1927,},Annotations:map[string]string{io.kubernetes.container.hash: 158eaf00,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:278a539192a9e7c904fb20462e7177dd70e833a4bcfc50bfda225435b77cdac4,PodSandboxId:ac4f5da2781b4b44297b4d7adecf74ff7c3ad33beaee16395c5e4882e53eed9f,Metadata:&ContainerMetadata{Name:kubernetes-dashboard-api,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard-api@sha256:2bd14c0ffee99d15fb1595644ebd1083ac32c5157c6e6fd8615b0f556a1390c2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0607af4fcd8ae78708c5bd51f34ccf8442967b8e41f56f008cc2884690f2f3b,State:CONTAINER_RUNNING,CreatedAt:1766116442753753567,Labels:ma
p[string]string{io.kubernetes.container.name: kubernetes-dashboard-api,io.kubernetes.pod.name: kubernetes-dashboard-api-56d75ddbb-tppfn,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 185d3f92-b7c0-43d3-a6f5-b9d52d8d41c0,},Annotations:map[string]string{io.kubernetes.container.hash: 547c95c6,io.kubernetes.container.ports: [{\"name\":\"api\",\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c52846cd4715f53aa4a5f474e43006e8c5483d90b1849df8dc0075dff1b6083c,PodSandboxId:88153576fecac10bb0a65a4a0730fde12e8880a90d8a049f71fef56dd129ead9,Metadata:&ContainerMetadata{Name:kubernetes-dashboard-auth,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard-auth@sha256:484c65877a3aa9e7095745576e55310f09f135b2fe5d9694289352fe65986052,Annotations:map[string]string{},UserS
pecifiedImage:,RuntimeHandler:,},ImageRef:dd54374d0ab14ab65dd3d5d975f97d5b4aff7b221db479874a4429225b9b22b1,State:CONTAINER_RUNNING,CreatedAt:1766116439276415330,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard-auth,io.kubernetes.pod.name: kubernetes-dashboard-auth-84ff87fdd5-zd9bz,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 1c55220d-544f-4947-b5c5-afdb85361029,},Annotations:map[string]string{io.kubernetes.container.hash: e1c595d7,io.kubernetes.container.ports: [{\"name\":\"auth\",\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:633226fb3e30d41ed72df6ade31838a480d47853a5dd02fecf2e176fc3b823b6,PodSandboxId:f31a88a220665e883c5e31493836838da73f09f4c18d8866a75898e7c9c7feb4,Metadata:&ContainerMetadata{Name:kubernetes-dashboard-metrics-scrap
er,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard-metrics-scraper@sha256:0cdefa0492f2868b93f43880ffad4bd8ae8105c02dc5661352a0a40723b05dbc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d9cbc9f4053ca11fa6edb814b512ab807a89346934700a3a425fc1e24a3f2167,State:CONTAINER_RUNNING,CreatedAt:1766116436190115082,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard-metrics-scraper,io.kubernetes.pod.name: kubernetes-dashboard-metrics-scraper-6b5c7dc479-5rsmg,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 43deb4b4-c100-4378-917c-1eefb131c216,},Annotations:map[string]string{io.kubernetes.container.hash: 33f81994,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d1bca547a
8cb13056ce6f41cd3d2827332c49678081ed4fdf1e5f07e548e8e9,PodSandboxId:5896d1e86f1716bc991865708d89f4e07225e965a5eae42250327484c19431a8,Metadata:&ContainerMetadata{Name:kubernetes-dashboard-web,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard-web@sha256:5924e55a2eb65774b68e35fd0d0b41a3ef5dc6ef3e02dce6e340cf59b6d67d30,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59f642f485d26d479d2dedc7c6139f5ce41939fa22c1152314fefdf3c463aa06,State:CONTAINER_RUNNING,CreatedAt:1766116432981249997,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard-web,io.kubernetes.pod.name: kubernetes-dashboard-web-858bd7466-c5kzr,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 5ae4b527-2bef-4144-a371-dc2f4116a3c3,},Annotations:map[string]string{io.kubernetes.container.hash: 1b2182b9,io.kubernetes.container.ports: [{\"name\":\"web\",\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:796faeaf498a0cf66778ac0bfdca4997a21c3fa259a33a7e4ed26351885ee0c9,PodSandboxId:7b093825f1fe4dd66c14ac2c10e7605906deab68f3acb0588b21657d6fea391e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1766116423691471037,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e1fc5999-4caf-496d-a302-707570e1f019,},Annotations:map[string]string{io.kubernetes.container.hash: c60f0de3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25458f0b01a86f841f0b80d9682d174573303f6a35d4a678193134bd34277761,PodSandboxId:fb0cbab3c54c59215d3f9b0027ec49fe8b34334a5cff2f1f4397a3758d14a620,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1766116420263737361,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jwzpn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4598eae-ba53-4173-bc4a-535887dc6d10,},Annotations:map[string]string{io.kubernetes.container.hash: fffff994,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"
containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c23bdc763b01672e6cb7e934236bc23e89892665c4518cce1a92bfd3230cd50,PodSandboxId:d8561e78594775abe17c477f444bda5ccdf4feec623c89b0dd77dd900a2cf47a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766116412592374221,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6361f87-812
3-46eb-8f28-110eb02b1927,},Annotations:map[string]string{io.kubernetes.container.hash: 158eaf00,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba408cece6208ddeb9327287a76e8a3d4f03bcc10c4b233b10f9959864c891b5,PodSandboxId:4d2c29ff4a2ed8cef6afbbd6aae68bc1475bd43ce62d8b9ac7d89237ae86f824,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a,State:CONTAINER_RUNNING,CreatedAt:1766116412573827827,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k4c59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d1adab-9314-4395-95f2-1ca383aefee1,},Annota
tions:map[string]string{io.kubernetes.container.hash: a49c72a6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f352655401bed294bcff033ea5d2aa2a0bdf87de5fbb1206e8731b680ce582c,PodSandboxId:2ed304f66f5d1100845fc56cd8d93da79cff42aca9a0c55c230af2194535cb4c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157,State:CONTAINER_RUNNING,CreatedAt:1766116408686391475,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-094166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dfb606b3e2663e111185ab803de8836,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 2df5c40d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf6833537f6aeed3fadb0f5baf15f935306cc5b6f0468e6e56b3922330e158ee,PodSandboxId:022dd0dc606224b9ea84f65a61d92ae969d481f95b322acbe2df9b403d830644,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1766116408697700650,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-094166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31afd2a7541d649fb566c092b7f15410,},Annotations:map[string]string{io.kubernetes.containe
r.hash: 5ddae9ea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d28987fec36e57788da0534ff938f4677376fcea5c129e696240ae6fc0ed015,PodSandboxId:31bb13c0703b5ea7fbb41944a5204406197098847c44617acb41400f918cd15d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95,State:CONTAINER_RUNNING,CreatedAt:1766116408617770483,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-094166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b78f15083684b811afd94e8a9518e12,},Annotations:map[string]string{io.kubernetes.container.hash:
404b5e1f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00fed501023e3502b030b4bb79ef632a05bce26dfa057b799045368d55f79e28,PodSandboxId:3cf899f74093f994cd317a64f629e1e64bf7e1d45d395ad5b5d6acfedd333e46,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62,State:CONTAINER_RUNNING,CreatedAt:1766116408638461352,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-094166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5474cd52daad06c999fd54796fd62f6,},Annotations:map[string]string{io.kubernet
es.container.hash: 78dfeacb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6276129a-ca34-4ca9-a018-6dac4bf67791 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:02:56 old-k8s-version-094166 crio[884]: time="2025-12-19 04:02:56.891190058Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cfd84e47-e6bb-4ffa-8208-9fa83825a8c1 name=/runtime.v1.RuntimeService/Version
	Dec 19 04:02:56 old-k8s-version-094166 crio[884]: time="2025-12-19 04:02:56.891277865Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cfd84e47-e6bb-4ffa-8208-9fa83825a8c1 name=/runtime.v1.RuntimeService/Version
	Dec 19 04:02:56 old-k8s-version-094166 crio[884]: time="2025-12-19 04:02:56.892980759Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cb01f3bc-3884-46d7-b816-ae28b38aa0b8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 04:02:56 old-k8s-version-094166 crio[884]: time="2025-12-19 04:02:56.893918724Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766116976893831330,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:198516,},InodesUsed:&UInt64Value{Value:87,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cb01f3bc-3884-46d7-b816-ae28b38aa0b8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 04:02:56 old-k8s-version-094166 crio[884]: time="2025-12-19 04:02:56.895004223Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=54ad7ffd-6dde-4be0-8115-79f6c4d7f739 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:02:56 old-k8s-version-094166 crio[884]: time="2025-12-19 04:02:56.895098353Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=54ad7ffd-6dde-4be0-8115-79f6c4d7f739 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:02:56 old-k8s-version-094166 crio[884]: time="2025-12-19 04:02:56.895370029Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4ba53a084d3418609c316a45fc2fb31ecffa561d929d3c31c11897926152f7f7,PodSandboxId:1d41e1adeda344c2b6bf312ecd196cb3f294b1dd0940b74440188e728e4f8236,Metadata:&ContainerMetadata{Name:proxy,Attempt:0,},Image:&ImageSpec{Image:3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480,State:CONTAINER_RUNNING,CreatedAt:1766116455714758363,Labels:map[string]string{io.kubernetes.container.name: proxy,io.kubernetes.pod.name: kubernetes-dashboard-kong-f487b85cd-6h64p,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: ce63b698-0626-4cdb-9035-f9f905d770cf,},Annotations:map[string]string{io.kubernetes.container.hash: f625957b,io.kubernetes.container.ports: [{\"name\":\"proxy-
tls\",\"containerPort\":8443,\"protocol\":\"TCP\"},{\"name\":\"status\",\"containerPort\":8100,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"kong\",\"quit\",\"--wait=15\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29b3a5531151deeda30e8690a92f8af8076432b199545941605702c1654d71ca,PodSandboxId:1d41e1adeda344c2b6bf312ecd196cb3f294b1dd0940b74440188e728e4f8236,Metadata:&ContainerMetadata{Name:clear-stale-pid,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/kong@sha256:4379444ecfd82794b27de38a74ba540e8571683dfdfce74c8ecb4018f308fb29,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480,State:CONTAINER_EXITED,CreatedAt:1766116454700263443,Labels:map[string]string{io.kubernetes.container.name: clear-
stale-pid,io.kubernetes.pod.name: kubernetes-dashboard-kong-f487b85cd-6h64p,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: ce63b698-0626-4cdb-9035-f9f905d770cf,},Annotations:map[string]string{io.kubernetes.container.hash: fc5d9912,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30d52d8be50dac733ff4ec86c05667c686a18577fa43f59488bb796d1a5f17cc,PodSandboxId:d8561e78594775abe17c477f444bda5ccdf4feec623c89b0dd77dd900a2cf47a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766116443517547989,Labels:map[string]string{io.kubernetes.contain
er.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6361f87-8123-46eb-8f28-110eb02b1927,},Annotations:map[string]string{io.kubernetes.container.hash: 158eaf00,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:278a539192a9e7c904fb20462e7177dd70e833a4bcfc50bfda225435b77cdac4,PodSandboxId:ac4f5da2781b4b44297b4d7adecf74ff7c3ad33beaee16395c5e4882e53eed9f,Metadata:&ContainerMetadata{Name:kubernetes-dashboard-api,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard-api@sha256:2bd14c0ffee99d15fb1595644ebd1083ac32c5157c6e6fd8615b0f556a1390c2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0607af4fcd8ae78708c5bd51f34ccf8442967b8e41f56f008cc2884690f2f3b,State:CONTAINER_RUNNING,CreatedAt:1766116442753753567,Labels:ma
p[string]string{io.kubernetes.container.name: kubernetes-dashboard-api,io.kubernetes.pod.name: kubernetes-dashboard-api-56d75ddbb-tppfn,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 185d3f92-b7c0-43d3-a6f5-b9d52d8d41c0,},Annotations:map[string]string{io.kubernetes.container.hash: 547c95c6,io.kubernetes.container.ports: [{\"name\":\"api\",\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c52846cd4715f53aa4a5f474e43006e8c5483d90b1849df8dc0075dff1b6083c,PodSandboxId:88153576fecac10bb0a65a4a0730fde12e8880a90d8a049f71fef56dd129ead9,Metadata:&ContainerMetadata{Name:kubernetes-dashboard-auth,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard-auth@sha256:484c65877a3aa9e7095745576e55310f09f135b2fe5d9694289352fe65986052,Annotations:map[string]string{},UserS
pecifiedImage:,RuntimeHandler:,},ImageRef:dd54374d0ab14ab65dd3d5d975f97d5b4aff7b221db479874a4429225b9b22b1,State:CONTAINER_RUNNING,CreatedAt:1766116439276415330,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard-auth,io.kubernetes.pod.name: kubernetes-dashboard-auth-84ff87fdd5-zd9bz,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 1c55220d-544f-4947-b5c5-afdb85361029,},Annotations:map[string]string{io.kubernetes.container.hash: e1c595d7,io.kubernetes.container.ports: [{\"name\":\"auth\",\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:633226fb3e30d41ed72df6ade31838a480d47853a5dd02fecf2e176fc3b823b6,PodSandboxId:f31a88a220665e883c5e31493836838da73f09f4c18d8866a75898e7c9c7feb4,Metadata:&ContainerMetadata{Name:kubernetes-dashboard-metrics-scrap
er,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard-metrics-scraper@sha256:0cdefa0492f2868b93f43880ffad4bd8ae8105c02dc5661352a0a40723b05dbc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d9cbc9f4053ca11fa6edb814b512ab807a89346934700a3a425fc1e24a3f2167,State:CONTAINER_RUNNING,CreatedAt:1766116436190115082,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard-metrics-scraper,io.kubernetes.pod.name: kubernetes-dashboard-metrics-scraper-6b5c7dc479-5rsmg,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 43deb4b4-c100-4378-917c-1eefb131c216,},Annotations:map[string]string{io.kubernetes.container.hash: 33f81994,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d1bca547a
8cb13056ce6f41cd3d2827332c49678081ed4fdf1e5f07e548e8e9,PodSandboxId:5896d1e86f1716bc991865708d89f4e07225e965a5eae42250327484c19431a8,Metadata:&ContainerMetadata{Name:kubernetes-dashboard-web,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard-web@sha256:5924e55a2eb65774b68e35fd0d0b41a3ef5dc6ef3e02dce6e340cf59b6d67d30,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59f642f485d26d479d2dedc7c6139f5ce41939fa22c1152314fefdf3c463aa06,State:CONTAINER_RUNNING,CreatedAt:1766116432981249997,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard-web,io.kubernetes.pod.name: kubernetes-dashboard-web-858bd7466-c5kzr,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 5ae4b527-2bef-4144-a371-dc2f4116a3c3,},Annotations:map[string]string{io.kubernetes.container.hash: 1b2182b9,io.kubernetes.container.ports: [{\"name\":\"web\",\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:796faeaf498a0cf66778ac0bfdca4997a21c3fa259a33a7e4ed26351885ee0c9,PodSandboxId:7b093825f1fe4dd66c14ac2c10e7605906deab68f3acb0588b21657d6fea391e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1766116423691471037,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e1fc5999-4caf-496d-a302-707570e1f019,},Annotations:map[string]string{io.kubernetes.container.hash: c60f0de3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25458f0b01a86f841f0b80d9682d174573303f6a35d4a678193134bd34277761,PodSandboxId:fb0cbab3c54c59215d3f9b0027ec49fe8b34334a5cff2f1f4397a3758d14a620,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1766116420263737361,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jwzpn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4598eae-ba53-4173-bc4a-535887dc6d10,},Annotations:map[string]string{io.kubernetes.container.hash: fffff994,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"
containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c23bdc763b01672e6cb7e934236bc23e89892665c4518cce1a92bfd3230cd50,PodSandboxId:d8561e78594775abe17c477f444bda5ccdf4feec623c89b0dd77dd900a2cf47a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766116412592374221,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6361f87-812
3-46eb-8f28-110eb02b1927,},Annotations:map[string]string{io.kubernetes.container.hash: 158eaf00,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba408cece6208ddeb9327287a76e8a3d4f03bcc10c4b233b10f9959864c891b5,PodSandboxId:4d2c29ff4a2ed8cef6afbbd6aae68bc1475bd43ce62d8b9ac7d89237ae86f824,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a,State:CONTAINER_RUNNING,CreatedAt:1766116412573827827,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k4c59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d1adab-9314-4395-95f2-1ca383aefee1,},Annota
tions:map[string]string{io.kubernetes.container.hash: a49c72a6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f352655401bed294bcff033ea5d2aa2a0bdf87de5fbb1206e8731b680ce582c,PodSandboxId:2ed304f66f5d1100845fc56cd8d93da79cff42aca9a0c55c230af2194535cb4c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157,State:CONTAINER_RUNNING,CreatedAt:1766116408686391475,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-094166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dfb606b3e2663e111185ab803de8836,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 2df5c40d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf6833537f6aeed3fadb0f5baf15f935306cc5b6f0468e6e56b3922330e158ee,PodSandboxId:022dd0dc606224b9ea84f65a61d92ae969d481f95b322acbe2df9b403d830644,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1766116408697700650,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-094166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31afd2a7541d649fb566c092b7f15410,},Annotations:map[string]string{io.kubernetes.containe
r.hash: 5ddae9ea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d28987fec36e57788da0534ff938f4677376fcea5c129e696240ae6fc0ed015,PodSandboxId:31bb13c0703b5ea7fbb41944a5204406197098847c44617acb41400f918cd15d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95,State:CONTAINER_RUNNING,CreatedAt:1766116408617770483,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-094166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b78f15083684b811afd94e8a9518e12,},Annotations:map[string]string{io.kubernetes.container.hash:
404b5e1f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00fed501023e3502b030b4bb79ef632a05bce26dfa057b799045368d55f79e28,PodSandboxId:3cf899f74093f994cd317a64f629e1e64bf7e1d45d395ad5b5d6acfedd333e46,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62,State:CONTAINER_RUNNING,CreatedAt:1766116408638461352,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-094166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5474cd52daad06c999fd54796fd62f6,},Annotations:map[string]string{io.kubernet
es.container.hash: 78dfeacb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=54ad7ffd-6dde-4be0-8115-79f6c4d7f739 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                      CREATED             STATE               NAME                                   ATTEMPT             POD ID              POD                                                     NAMESPACE
	4ba53a084d341       3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480                                                           8 minutes ago       Running             proxy                                  0                   1d41e1adeda34       kubernetes-dashboard-kong-f487b85cd-6h64p               kubernetes-dashboard
	29b3a5531151d       docker.io/library/kong@sha256:4379444ecfd82794b27de38a74ba540e8571683dfdfce74c8ecb4018f308fb29                             8 minutes ago       Exited              clear-stale-pid                        0                   1d41e1adeda34       kubernetes-dashboard-kong-f487b85cd-6h64p               kubernetes-dashboard
	30d52d8be50da       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                           8 minutes ago       Running             storage-provisioner                    3                   d8561e7859477       storage-provisioner                                     kube-system
	278a539192a9e       docker.io/kubernetesui/dashboard-api@sha256:2bd14c0ffee99d15fb1595644ebd1083ac32c5157c6e6fd8615b0f556a1390c2               8 minutes ago       Running             kubernetes-dashboard-api               0                   ac4f5da2781b4       kubernetes-dashboard-api-56d75ddbb-tppfn                kubernetes-dashboard
	c52846cd4715f       docker.io/kubernetesui/dashboard-auth@sha256:484c65877a3aa9e7095745576e55310f09f135b2fe5d9694289352fe65986052              8 minutes ago       Running             kubernetes-dashboard-auth              0                   88153576fecac       kubernetes-dashboard-auth-84ff87fdd5-zd9bz              kubernetes-dashboard
	633226fb3e30d       docker.io/kubernetesui/dashboard-metrics-scraper@sha256:0cdefa0492f2868b93f43880ffad4bd8ae8105c02dc5661352a0a40723b05dbc   9 minutes ago       Running             kubernetes-dashboard-metrics-scraper   0                   f31a88a220665       kubernetes-dashboard-metrics-scraper-6b5c7dc479-5rsmg   kubernetes-dashboard
	6d1bca547a8cb       docker.io/kubernetesui/dashboard-web@sha256:5924e55a2eb65774b68e35fd0d0b41a3ef5dc6ef3e02dce6e340cf59b6d67d30               9 minutes ago       Running             kubernetes-dashboard-web               0                   5896d1e86f171       kubernetes-dashboard-web-858bd7466-c5kzr                kubernetes-dashboard
	796faeaf498a0       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                        9 minutes ago       Running             busybox                                1                   7b093825f1fe4       busybox                                                 default
	25458f0b01a86       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                           9 minutes ago       Running             coredns                                1                   fb0cbab3c54c5       coredns-5dd5756b68-jwzpn                                kube-system
	9c23bdc763b01       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                           9 minutes ago       Exited              storage-provisioner                    2                   d8561e7859477       storage-provisioner                                     kube-system
	ba408cece6208       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                                           9 minutes ago       Running             kube-proxy                             1                   4d2c29ff4a2ed       kube-proxy-k4c59                                        kube-system
	cf6833537f6ae       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                           9 minutes ago       Running             etcd                                   1                   022dd0dc60622       etcd-old-k8s-version-094166                             kube-system
	9f352655401be       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                                           9 minutes ago       Running             kube-scheduler                         1                   2ed304f66f5d1       kube-scheduler-old-k8s-version-094166                   kube-system
	00fed501023e3       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                                           9 minutes ago       Running             kube-controller-manager                1                   3cf899f74093f       kube-controller-manager-old-k8s-version-094166          kube-system
	0d28987fec36e       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                                           9 minutes ago       Running             kube-apiserver                         1                   31bb13c0703b5       kube-apiserver-old-k8s-version-094166                   kube-system
	
	
	==> coredns [25458f0b01a86f841f0b80d9682d174573303f6a35d4a678193134bd34277761] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:46353 - 10313 "HINFO IN 7031563663414278408.8956184294594618866. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.01812941s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-094166
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-094166
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=old-k8s-version-094166
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T03_50_28_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 03:50:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-094166
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 04:02:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 03:59:40 +0000   Fri, 19 Dec 2025 03:50:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 03:59:40 +0000   Fri, 19 Dec 2025 03:50:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 03:59:40 +0000   Fri, 19 Dec 2025 03:50:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 03:59:40 +0000   Fri, 19 Dec 2025 03:53:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.65
	  Hostname:    old-k8s-version-094166
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 fcbb2c892246481388d54e88e69ff22c
	  System UUID:                fcbb2c89-2246-4813-88d5-4e88e69ff22c
	  Boot ID:                    05d4f12e-d326-4afb-9bcb-c16595fd1b4a
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-5dd5756b68-jwzpn                                 100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     12m
	  kube-system                 etcd-old-k8s-version-094166                              100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         12m
	  kube-system                 kube-apiserver-old-k8s-version-094166                    250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-old-k8s-version-094166           200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-k4c59                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-old-k8s-version-094166                    100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-57f55c9bc5-9sqkf                          100m (5%)     0 (0%)      200Mi (6%)       0 (0%)         11m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        kubernetes-dashboard-api-56d75ddbb-tppfn                 100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    9m13s
	  kubernetes-dashboard        kubernetes-dashboard-auth-84ff87fdd5-zd9bz               100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    9m13s
	  kubernetes-dashboard        kubernetes-dashboard-kong-f487b85cd-6h64p                0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	  kubernetes-dashboard        kubernetes-dashboard-metrics-scraper-6b5c7dc479-5rsmg    100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    9m13s
	  kubernetes-dashboard        kubernetes-dashboard-web-858bd7466-c5kzr                 100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    9m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests      Limits
	  --------           --------      ------
	  cpu                1250m (62%)   1 (50%)
	  memory             1170Mi (39%)  1770Mi (59%)
	  ephemeral-storage  0 (0%)        0 (0%)
	  hugepages-2Mi      0 (0%)        0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  Starting                 9m24s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m                    kubelet          Node old-k8s-version-094166 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                    kubelet          Node old-k8s-version-094166 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                    kubelet          Node old-k8s-version-094166 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                    kubelet          Starting kubelet.
	  Normal  NodeReady                12m                    kubelet          Node old-k8s-version-094166 status is now: NodeReady
	  Normal  RegisteredNode           12m                    node-controller  Node old-k8s-version-094166 event: Registered Node old-k8s-version-094166 in Controller
	  Normal  Starting                 9m31s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m31s (x8 over 9m31s)  kubelet          Node old-k8s-version-094166 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m31s (x8 over 9m31s)  kubelet          Node old-k8s-version-094166 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m31s (x7 over 9m31s)  kubelet          Node old-k8s-version-094166 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m13s                  node-controller  Node old-k8s-version-094166 event: Registered Node old-k8s-version-094166 in Controller
	
	
	==> dmesg <==
	[Dec19 03:53] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000050] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.003995] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.901908] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.128466] kauditd_printk_skb: 60 callbacks suppressed
	[  +0.099578] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.498903] kauditd_printk_skb: 177 callbacks suppressed
	[  +0.000048] kauditd_printk_skb: 122 callbacks suppressed
	[  +3.653682] kauditd_printk_skb: 143 callbacks suppressed
	[  +6.164427] kauditd_printk_skb: 204 callbacks suppressed
	[  +6.279079] kauditd_printk_skb: 32 callbacks suppressed
	[Dec19 03:54] kauditd_printk_skb: 47 callbacks suppressed
	[ +11.943764] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [cf6833537f6aeed3fadb0f5baf15f935306cc5b6f0468e6e56b3922330e158ee] <==
	{"level":"warn","ts":"2025-12-19T03:54:14.481232Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.319611355s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.61.65\" ","response":"range_response_count:1 size:133"}
	{"level":"info","ts":"2025-12-19T03:54:14.481249Z","caller":"traceutil/trace.go:171","msg":"trace[175171455] range","detail":"{range_begin:/registry/masterleases/192.168.61.65; range_end:; response_count:1; response_revision:792; }","duration":"1.319630182s","start":"2025-12-19T03:54:13.161614Z","end":"2025-12-19T03:54:14.481244Z","steps":["trace[175171455] 'agreement among raft nodes before linearized reading'  (duration: 1.31958912s)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:54:14.481269Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-19T03:54:13.161597Z","time spent":"1.319667677s","remote":"127.0.0.1:33612","response type":"/etcdserverpb.KV/Range","request count":0,"request size":38,"response count":1,"response size":156,"request content":"key:\"/registry/masterleases/192.168.61.65\" "}
	{"level":"warn","ts":"2025-12-19T03:54:14.481516Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"736.436776ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:5 size:31497"}
	{"level":"info","ts":"2025-12-19T03:54:14.481553Z","caller":"traceutil/trace.go:171","msg":"trace[1018056850] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:5; response_revision:792; }","duration":"736.478183ms","start":"2025-12-19T03:54:13.745065Z","end":"2025-12-19T03:54:14.481543Z","steps":["trace[1018056850] 'agreement among raft nodes before linearized reading'  (duration: 736.316943ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:54:14.481581Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-19T03:54:13.745048Z","time spent":"736.527224ms","remote":"127.0.0.1:33780","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":5,"response size":31520,"request content":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" "}
	{"level":"warn","ts":"2025-12-19T03:54:14.481997Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"646.838108ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T03:54:14.482378Z","caller":"traceutil/trace.go:171","msg":"trace[835898818] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:792; }","duration":"647.220797ms","start":"2025-12-19T03:54:13.835147Z","end":"2025-12-19T03:54:14.482368Z","steps":["trace[835898818] 'agreement among raft nodes before linearized reading'  (duration: 646.816574ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:54:14.482418Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-19T03:54:13.835136Z","time spent":"647.271845ms","remote":"127.0.0.1:33578","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2025-12-19T03:54:17.012693Z","caller":"traceutil/trace.go:171","msg":"trace[919623372] linearizableReadLoop","detail":"{readStateIndex:858; appliedIndex:857; }","duration":"267.888131ms","start":"2025-12-19T03:54:16.744792Z","end":"2025-12-19T03:54:17.01268Z","steps":["trace[919623372] 'read index received'  (duration: 267.75147ms)","trace[919623372] 'applied index is now lower than readState.Index'  (duration: 136.325µs)"],"step_count":2}
	{"level":"info","ts":"2025-12-19T03:54:17.012827Z","caller":"traceutil/trace.go:171","msg":"trace[1409116822] transaction","detail":"{read_only:false; response_revision:805; number_of_response:1; }","duration":"301.760136ms","start":"2025-12-19T03:54:16.711061Z","end":"2025-12-19T03:54:17.012821Z","steps":["trace[1409116822] 'process raft request'  (duration: 301.52474ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:54:17.013026Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"175.760958ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T03:54:17.013095Z","caller":"traceutil/trace.go:171","msg":"trace[1626339163] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:805; }","duration":"175.839162ms","start":"2025-12-19T03:54:16.837244Z","end":"2025-12-19T03:54:17.013083Z","steps":["trace[1626339163] 'agreement among raft nodes before linearized reading'  (duration: 175.741891ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:54:17.013048Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-19T03:54:16.71104Z","time spent":"301.914135ms","remote":"127.0.0.1:33780","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":13227,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd-6h64p\" mod_revision:800 > success:<request_put:<key:\"/registry/pods/kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd-6h64p\" value_size:13142 >> failure:<request_range:<key:\"/registry/pods/kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd-6h64p\" > >"}
	{"level":"warn","ts":"2025-12-19T03:54:17.013315Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"268.534942ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:5 size:31904"}
	{"level":"info","ts":"2025-12-19T03:54:17.013475Z","caller":"traceutil/trace.go:171","msg":"trace[412993667] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:5; response_revision:805; }","duration":"268.697504ms","start":"2025-12-19T03:54:16.744769Z","end":"2025-12-19T03:54:17.013466Z","steps":["trace[412993667] 'agreement among raft nodes before linearized reading'  (duration: 268.414353ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:54:45.374171Z","caller":"traceutil/trace.go:171","msg":"trace[22211896] transaction","detail":"{read_only:false; response_revision:840; number_of_response:1; }","duration":"120.475084ms","start":"2025-12-19T03:54:45.253682Z","end":"2025-12-19T03:54:45.374157Z","steps":["trace[22211896] 'process raft request'  (duration: 120.380289ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:54:46.10744Z","caller":"traceutil/trace.go:171","msg":"trace[184689208] linearizableReadLoop","detail":"{readStateIndex:901; appliedIndex:900; }","duration":"361.132161ms","start":"2025-12-19T03:54:45.746295Z","end":"2025-12-19T03:54:46.107427Z","steps":["trace[184689208] 'read index received'  (duration: 360.991851ms)","trace[184689208] 'applied index is now lower than readState.Index'  (duration: 139.846µs)"],"step_count":2}
	{"level":"info","ts":"2025-12-19T03:54:46.107803Z","caller":"traceutil/trace.go:171","msg":"trace[122968497] transaction","detail":"{read_only:false; response_revision:841; number_of_response:1; }","duration":"619.87802ms","start":"2025-12-19T03:54:45.487915Z","end":"2025-12-19T03:54:46.107793Z","steps":["trace[122968497] 'process raft request'  (duration: 619.413842ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:54:46.107793Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"272.644735ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-12-19T03:54:46.108034Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"361.759533ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:5 size:31730"}
	{"level":"info","ts":"2025-12-19T03:54:46.108Z","caller":"traceutil/trace.go:171","msg":"trace[735910079] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:841; }","duration":"272.86216ms","start":"2025-12-19T03:54:45.835127Z","end":"2025-12-19T03:54:46.107989Z","steps":["trace[735910079] 'agreement among raft nodes before linearized reading'  (duration: 272.588085ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:54:46.108075Z","caller":"traceutil/trace.go:171","msg":"trace[1979747885] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:5; response_revision:841; }","duration":"361.798659ms","start":"2025-12-19T03:54:45.746269Z","end":"2025-12-19T03:54:46.108068Z","steps":["trace[1979747885] 'agreement among raft nodes before linearized reading'  (duration: 361.717001ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:54:46.108101Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-19T03:54:45.746227Z","time spent":"361.868953ms","remote":"127.0.0.1:33780","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":5,"response size":31753,"request content":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" "}
	{"level":"warn","ts":"2025-12-19T03:54:46.107967Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-19T03:54:45.487898Z","time spent":"620.007152ms","remote":"127.0.0.1:33874","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":687,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-osq6jgaiw7qwbygbc3dlqorewy\" mod_revision:829 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-osq6jgaiw7qwbygbc3dlqorewy\" value_size:614 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-osq6jgaiw7qwbygbc3dlqorewy\" > >"}
	
	
	==> kernel <==
	 04:02:57 up 9 min,  0 users,  load average: 0.08, 0.20, 0.17
	Linux old-k8s-version-094166 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Dec 17 12:49:57 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [0d28987fec36e57788da0534ff938f4677376fcea5c129e696240ae6fc0ed015] <==
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1219 03:58:32.983818       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1219 03:58:32.985116       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1219 03:59:31.847603       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.104.104.239:443: connect: connection refused
	I1219 03:59:31.847672       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1219 03:59:32.984519       1 handler_proxy.go:93] no RequestInfo found in the context
	E1219 03:59:32.984575       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1219 03:59:32.984586       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:59:32.986006       1 handler_proxy.go:93] no RequestInfo found in the context
	E1219 03:59:32.986101       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1219 03:59:32.986118       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1219 04:00:31.847154       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.104.104.239:443: connect: connection refused
	I1219 04:00:31.847181       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1219 04:01:31.847721       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.104.104.239:443: connect: connection refused
	I1219 04:01:31.847792       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1219 04:01:32.985318       1 handler_proxy.go:93] no RequestInfo found in the context
	E1219 04:01:32.985409       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1219 04:01:32.985426       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 04:01:32.986509       1 handler_proxy.go:93] no RequestInfo found in the context
	E1219 04:01:32.986659       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1219 04:01:32.986669       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1219 04:02:31.846985       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.104.104.239:443: connect: connection refused
	I1219 04:02:31.847003       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	
	==> kube-controller-manager [00fed501023e3502b030b4bb79ef632a05bce26dfa057b799045368d55f79e28] <==
	I1219 03:57:16.202228       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="143.006µs"
	E1219 03:57:44.467401       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 03:57:45.071614       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1219 03:58:14.474080       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 03:58:15.080808       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1219 03:58:44.480069       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 03:58:45.091569       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1219 03:59:14.485999       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 03:59:15.101080       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1219 03:59:44.493748       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 03:59:45.108996       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1219 03:59:52.206902       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="243.071µs"
	I1219 04:00:04.200455       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="109.154µs"
	E1219 04:00:14.501305       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 04:00:15.117681       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1219 04:00:44.506740       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 04:00:45.125560       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1219 04:01:14.513236       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 04:01:15.135199       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1219 04:01:44.520597       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 04:01:45.144657       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1219 04:02:14.525821       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 04:02:15.154632       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1219 04:02:44.532238       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 04:02:45.163763       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [ba408cece6208ddeb9327287a76e8a3d4f03bcc10c4b233b10f9959864c891b5] <==
	I1219 03:53:32.731023       1 server_others.go:69] "Using iptables proxy"
	I1219 03:53:32.741933       1 node.go:141] Successfully retrieved node IP: 192.168.61.65
	I1219 03:53:32.784509       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1219 03:53:32.784528       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1219 03:53:32.787789       1 server_others.go:152] "Using iptables Proxier"
	I1219 03:53:32.787930       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1219 03:53:32.788159       1 server.go:846] "Version info" version="v1.28.0"
	I1219 03:53:32.788382       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:53:32.789313       1 config.go:188] "Starting service config controller"
	I1219 03:53:32.789389       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1219 03:53:32.789433       1 config.go:97] "Starting endpoint slice config controller"
	I1219 03:53:32.789457       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1219 03:53:32.791714       1 config.go:315] "Starting node config controller"
	I1219 03:53:32.791765       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1219 03:53:32.890302       1 shared_informer.go:318] Caches are synced for service config
	I1219 03:53:32.890814       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1219 03:53:32.892144       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [9f352655401bed294bcff033ea5d2aa2a0bdf87de5fbb1206e8731b680ce582c] <==
	I1219 03:53:29.765992       1 serving.go:348] Generated self-signed cert in-memory
	W1219 03:53:31.922660       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1219 03:53:31.922706       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1219 03:53:31.922721       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1219 03:53:31.922727       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1219 03:53:31.973790       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1219 03:53:31.973940       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:53:31.983544       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1219 03:53:31.984064       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:53:31.992629       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1219 03:53:31.984083       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1219 03:53:32.093025       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 19 04:00:26 old-k8s-version-094166 kubelet[1229]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 19 04:00:26 old-k8s-version-094166 kubelet[1229]:         ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 19 04:00:26 old-k8s-version-094166 kubelet[1229]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 19 04:00:26 old-k8s-version-094166 kubelet[1229]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 19 04:00:30 old-k8s-version-094166 kubelet[1229]: E1219 04:00:30.183368    1229 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9sqkf" podUID="7aa70eac-5629-4a04-8a38-3701d9e33cda"
	Dec 19 04:00:44 old-k8s-version-094166 kubelet[1229]: E1219 04:00:44.183492    1229 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9sqkf" podUID="7aa70eac-5629-4a04-8a38-3701d9e33cda"
	Dec 19 04:00:59 old-k8s-version-094166 kubelet[1229]: E1219 04:00:59.183377    1229 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9sqkf" podUID="7aa70eac-5629-4a04-8a38-3701d9e33cda"
	Dec 19 04:01:14 old-k8s-version-094166 kubelet[1229]: E1219 04:01:14.183297    1229 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9sqkf" podUID="7aa70eac-5629-4a04-8a38-3701d9e33cda"
	Dec 19 04:01:26 old-k8s-version-094166 kubelet[1229]: E1219 04:01:26.206517    1229 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 19 04:01:26 old-k8s-version-094166 kubelet[1229]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 19 04:01:26 old-k8s-version-094166 kubelet[1229]:         ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 19 04:01:26 old-k8s-version-094166 kubelet[1229]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 19 04:01:26 old-k8s-version-094166 kubelet[1229]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 19 04:01:28 old-k8s-version-094166 kubelet[1229]: E1219 04:01:28.188165    1229 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9sqkf" podUID="7aa70eac-5629-4a04-8a38-3701d9e33cda"
	Dec 19 04:01:43 old-k8s-version-094166 kubelet[1229]: E1219 04:01:43.183441    1229 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9sqkf" podUID="7aa70eac-5629-4a04-8a38-3701d9e33cda"
	Dec 19 04:01:55 old-k8s-version-094166 kubelet[1229]: E1219 04:01:55.183183    1229 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9sqkf" podUID="7aa70eac-5629-4a04-8a38-3701d9e33cda"
	Dec 19 04:02:09 old-k8s-version-094166 kubelet[1229]: E1219 04:02:09.183273    1229 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9sqkf" podUID="7aa70eac-5629-4a04-8a38-3701d9e33cda"
	Dec 19 04:02:24 old-k8s-version-094166 kubelet[1229]: E1219 04:02:24.183113    1229 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9sqkf" podUID="7aa70eac-5629-4a04-8a38-3701d9e33cda"
	Dec 19 04:02:26 old-k8s-version-094166 kubelet[1229]: E1219 04:02:26.207831    1229 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 19 04:02:26 old-k8s-version-094166 kubelet[1229]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 19 04:02:26 old-k8s-version-094166 kubelet[1229]:         ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 19 04:02:26 old-k8s-version-094166 kubelet[1229]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 19 04:02:26 old-k8s-version-094166 kubelet[1229]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 19 04:02:35 old-k8s-version-094166 kubelet[1229]: E1219 04:02:35.182822    1229 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9sqkf" podUID="7aa70eac-5629-4a04-8a38-3701d9e33cda"
	Dec 19 04:02:49 old-k8s-version-094166 kubelet[1229]: E1219 04:02:49.183725    1229 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9sqkf" podUID="7aa70eac-5629-4a04-8a38-3701d9e33cda"
	
	
	==> kubernetes-dashboard [278a539192a9e7c904fb20462e7177dd70e833a4bcfc50bfda225435b77cdac4] <==
	I1219 03:54:03.051217       1 main.go:40] "Starting Kubernetes Dashboard API" version="1.14.0"
	I1219 03:54:03.051322       1 init.go:49] Using in-cluster config
	I1219 03:54:03.051767       1 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1219 03:54:03.051800       1 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1219 03:54:03.051939       1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1219 03:54:03.051977       1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1219 03:54:03.059810       1 main.go:119] "Successful initial request to the apiserver" version="v1.28.0"
	I1219 03:54:03.059967       1 client.go:265] Creating in-cluster Sidecar client
	I1219 03:54:03.070983       1 main.go:96] "Listening and serving on" address="0.0.0.0:8000"
	I1219 03:54:03.150776       1 manager.go:101] Successful request to sidecar
	
	
	==> kubernetes-dashboard [633226fb3e30d41ed72df6ade31838a480d47853a5dd02fecf2e176fc3b823b6] <==
	E1219 04:00:56.496513       1 main.go:114] Error scraping node metrics: the server is currently unable to handle the request (get nodes.metrics.k8s.io)
	E1219 04:01:56.501404       1 main.go:114] Error scraping node metrics: the server is currently unable to handle the request (get nodes.metrics.k8s.io)
	E1219 04:02:56.496603       1 main.go:114] Error scraping node metrics: the server is currently unable to handle the request (get nodes.metrics.k8s.io)
	10.244.0.1 - - [19/Dec/2025:04:00:16 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:04:00:26 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:04:00:33 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:04:00:36 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:04:00:46 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:04:00:56 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:04:01:03 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:04:01:06 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:04:01:16 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:04:01:26 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:04:01:33 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:04:01:36 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:04:01:46 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:04:01:56 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:04:02:03 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:04:02:06 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:04:02:16 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:04:02:26 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:04:02:33 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:04:02:36 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:04:02:46 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:04:02:56 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	
	
	==> kubernetes-dashboard [6d1bca547a8cb13056ce6f41cd3d2827332c49678081ed4fdf1e5f07e548e8e9] <==
	I1219 03:53:53.296986       1 main.go:37] "Starting Kubernetes Dashboard Web" version="1.7.0"
	I1219 03:53:53.297089       1 init.go:48] Using in-cluster config
	I1219 03:53:53.297478       1 main.go:57] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> kubernetes-dashboard [c52846cd4715f53aa4a5f474e43006e8c5483d90b1849df8dc0075dff1b6083c] <==
	I1219 03:53:59.504360       1 main.go:34] "Starting Kubernetes Dashboard Auth" version="1.4.0"
	I1219 03:53:59.504522       1 init.go:49] Using in-cluster config
	I1219 03:53:59.504672       1 main.go:44] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> storage-provisioner [30d52d8be50dac733ff4ec86c05667c686a18577fa43f59488bb796d1a5f17cc] <==
	I1219 03:54:03.682828       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1219 03:54:03.697654       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1219 03:54:03.698060       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1219 03:54:21.116001       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1219 03:54:21.119286       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"151e0a76-60e8-47dd-a88b-79e45b0cb6e8", APIVersion:"v1", ResourceVersion:"806", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-094166_90084209-5341-4d92-95a3-fa64f6c8361b became leader
	I1219 03:54:21.119469       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-094166_90084209-5341-4d92-95a3-fa64f6c8361b!
	I1219 03:54:21.221102       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-094166_90084209-5341-4d92-95a3-fa64f6c8361b!
	
	
	==> storage-provisioner [9c23bdc763b01672e6cb7e934236bc23e89892665c4518cce1a92bfd3230cd50] <==
	I1219 03:53:32.699182       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1219 03:54:02.702442       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-094166 -n old-k8s-version-094166
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-094166 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: metrics-server-57f55c9bc5-9sqkf
helpers_test.go:283: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context old-k8s-version-094166 describe pod metrics-server-57f55c9bc5-9sqkf
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context old-k8s-version-094166 describe pod metrics-server-57f55c9bc5-9sqkf: exit status 1 (65.258399ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-9sqkf" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context old-k8s-version-094166 describe pod metrics-server-57f55c9bc5-9sqkf: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (542.60s)
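To triage this failure by hand, the post-mortem checks above can be repeated against the same profile. This is a minimal sketch, assuming the old-k8s-version-094166 context still exists and that the dashboard pods carry the k8s-app=kubernetes-dashboard label the test waits on (the selector shown for the no-preload group below):

	# list pods not in Running phase, mirroring the helpers_test.go field-selector query above
	kubectl --context old-k8s-version-094166 get po -A --field-selector=status.phase!=Running
	# list the pods the wait loop selects on (label assumed from the no-preload group below)
	kubectl --context old-k8s-version-094166 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# re-collect the last 25 minikube log lines for this profile
	out/minikube-linux-amd64 -p old-k8s-version-094166 logs -n 25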

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.41s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1219 04:00:24.934560    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/flannel-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-298059 -n no-preload-298059
start_stop_delete_test.go:272: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-12-19 04:09:03.202548791 +0000 UTC m=+6246.240728103
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-298059 -n no-preload-298059
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-298059 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-298059 logs -n 25: (1.3137123s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────
─────┐
	│ COMMAND │                                                                                                                    ARGS                                                                                                                     │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────
─────┤
	│ ssh     │ -p bridge-542624 sudo cat /etc/containerd/config.toml                                                                                                                                                                                       │ bridge-542624                │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:51 UTC │
	│ ssh     │ -p bridge-542624 sudo containerd config dump                                                                                                                                                                                                │ bridge-542624                │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:51 UTC │
	│ ssh     │ -p bridge-542624 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                         │ bridge-542624                │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:51 UTC │
	│ ssh     │ -p bridge-542624 sudo systemctl cat crio --no-pager                                                                                                                                                                                         │ bridge-542624                │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:51 UTC │
	│ ssh     │ -p bridge-542624 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                               │ bridge-542624                │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:51 UTC │
	│ ssh     │ -p bridge-542624 sudo crio config                                                                                                                                                                                                           │ bridge-542624                │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:51 UTC │
	│ delete  │ -p bridge-542624                                                                                                                                                                                                                            │ bridge-542624                │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:51 UTC │
	│ delete  │ -p disable-driver-mounts-189846                                                                                                                                                                                                             │ disable-driver-mounts-189846 │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:51 UTC │
	│ start   │ -p default-k8s-diff-port-168174 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-168174 │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:52 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-094166 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                │ old-k8s-version-094166       │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:51 UTC │
	│ stop    │ -p old-k8s-version-094166 --alsologtostderr -v=3                                                                                                                                                                                            │ old-k8s-version-094166       │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:53 UTC │
	│ addons  │ enable metrics-server -p no-preload-298059 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                     │ no-preload-298059            │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:51 UTC │
	│ stop    │ -p no-preload-298059 --alsologtostderr -v=3                                                                                                                                                                                                 │ no-preload-298059            │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:53 UTC │
	│ addons  │ enable metrics-server -p embed-certs-244717 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                    │ embed-certs-244717           │ jenkins │ v1.37.0 │ 19 Dec 25 03:52 UTC │ 19 Dec 25 03:52 UTC │
	│ stop    │ -p embed-certs-244717 --alsologtostderr -v=3                                                                                                                                                                                                │ embed-certs-244717           │ jenkins │ v1.37.0 │ 19 Dec 25 03:52 UTC │ 19 Dec 25 03:53 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-168174 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                          │ default-k8s-diff-port-168174 │ jenkins │ v1.37.0 │ 19 Dec 25 03:52 UTC │ 19 Dec 25 03:52 UTC │
	│ stop    │ -p default-k8s-diff-port-168174 --alsologtostderr -v=3                                                                                                                                                                                      │ default-k8s-diff-port-168174 │ jenkins │ v1.37.0 │ 19 Dec 25 03:52 UTC │ 19 Dec 25 03:54 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-094166 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                           │ old-k8s-version-094166       │ jenkins │ v1.37.0 │ 19 Dec 25 03:53 UTC │ 19 Dec 25 03:53 UTC │
	│ start   │ -p old-k8s-version-094166 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-094166       │ jenkins │ v1.37.0 │ 19 Dec 25 03:53 UTC │ 19 Dec 25 03:53 UTC │
	│ addons  │ enable dashboard -p no-preload-298059 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                │ no-preload-298059            │ jenkins │ v1.37.0 │ 19 Dec 25 03:53 UTC │ 19 Dec 25 03:53 UTC │
	│ start   │ -p no-preload-298059 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-298059            │ jenkins │ v1.37.0 │ 19 Dec 25 03:53 UTC │ 19 Dec 25 04:00 UTC │
	│ addons  │ enable dashboard -p embed-certs-244717 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                               │ embed-certs-244717           │ jenkins │ v1.37.0 │ 19 Dec 25 03:53 UTC │ 19 Dec 25 03:53 UTC │
	│ start   │ -p embed-certs-244717 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-244717           │ jenkins │ v1.37.0 │ 19 Dec 25 03:53 UTC │ 19 Dec 25 04:00 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-168174 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                     │ default-k8s-diff-port-168174 │ jenkins │ v1.37.0 │ 19 Dec 25 03:54 UTC │ 19 Dec 25 03:54 UTC │
	│ start   │ -p default-k8s-diff-port-168174 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-168174 │ jenkins │ v1.37.0 │ 19 Dec 25 03:54 UTC │ 19 Dec 25 04:01 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────
─────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 03:54:19
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 03:54:19.163618   56230 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:54:19.163755   56230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:54:19.163766   56230 out.go:374] Setting ErrFile to fd 2...
	I1219 03:54:19.163773   56230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:54:19.164086   56230 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5010/.minikube/bin
	I1219 03:54:19.164710   56230 out.go:368] Setting JSON to false
	I1219 03:54:19.166058   56230 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5803,"bootTime":1766110656,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 03:54:19.166138   56230 start.go:143] virtualization: kvm guest
	I1219 03:54:19.167819   56230 out.go:179] * [default-k8s-diff-port-168174] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 03:54:19.168806   56230 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 03:54:19.168798   56230 notify.go:221] Checking for updates...
	I1219 03:54:19.170649   56230 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 03:54:19.171718   56230 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-5010/kubeconfig
	I1219 03:54:19.172800   56230 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5010/.minikube
	I1219 03:54:19.173680   56230 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 03:54:19.174607   56230 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 03:54:19.176155   56230 config.go:182] Loaded profile config "default-k8s-diff-port-168174": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:54:19.176843   56230 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 03:54:19.221795   56230 out.go:179] * Using the kvm2 driver based on existing profile
	I1219 03:54:19.222673   56230 start.go:309] selected driver: kvm2
	I1219 03:54:19.222686   56230 start.go:928] validating driver "kvm2" against &{Name:default-k8s-diff-port-168174 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-168174 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.68 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:f
alse ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:54:19.222787   56230 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 03:54:19.223700   56230 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:54:19.223731   56230 cni.go:84] Creating CNI manager for ""
	I1219 03:54:19.223785   56230 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1219 03:54:19.223821   56230 start.go:353] cluster config:
	{Name:default-k8s-diff-port-168174 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-168174 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.68 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000
.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:54:19.223901   56230 iso.go:125] acquiring lock: {Name:mk42290af04a74a4dddf27fa33aac85c9bdccfe0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:54:19.225058   56230 out.go:179] * Starting "default-k8s-diff-port-168174" primary control-plane node in "default-k8s-diff-port-168174" cluster
	I1219 03:54:19.225891   56230 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1219 03:54:19.225925   56230 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-5010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1219 03:54:19.225937   56230 cache.go:65] Caching tarball of preloaded images
	I1219 03:54:19.226014   56230 preload.go:238] Found /home/jenkins/minikube-integration/22230-5010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1219 03:54:19.226025   56230 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1219 03:54:19.226103   56230 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/default-k8s-diff-port-168174/config.json ...
	I1219 03:54:19.226379   56230 start.go:360] acquireMachinesLock for default-k8s-diff-port-168174: {Name:mk229398d29442d4d52885bbac963e5004bfbfba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1219 03:54:19.226434   56230 start.go:364] duration metric: took 34.138µs to acquireMachinesLock for "default-k8s-diff-port-168174"
	I1219 03:54:19.226446   56230 start.go:96] Skipping create...Using existing machine configuration
	I1219 03:54:19.226451   56230 fix.go:54] fixHost starting: 
	I1219 03:54:19.228163   56230 fix.go:112] recreateIfNeeded on default-k8s-diff-port-168174: state=Stopped err=<nil>
	W1219 03:54:19.228180   56230 fix.go:138] unexpected machine state, will restart: <nil>
	I1219 03:54:16.533332   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:17.359209   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:17.532886   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:18.033640   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:18.533499   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:19.033373   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:19.533624   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:20.033318   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:20.532932   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:21.032204   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
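	(The kapi.go lines above are the dashboard addon polling for the dashboard-web pod by label. A hedged manual equivalent, with <profile> standing in for whichever profile this goroutine belongs to and the namespace assumed to be kubernetes-dashboard, where the dashboard addon deploys:

	    kubectl --context <profile> -n kubernetes-dashboard get pods -l app.kubernetes.io/name=kubernetes-dashboard-web

	A pod stuck in Pending here usually points at scheduling or image-pull trouble, which kubectl describe on the same selector would surface in its Events section.)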
	I1219 03:54:18.384127   55957 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:54:18.420807   55957 api_server.go:72] duration metric: took 1.537508247s to wait for apiserver process to appear ...
	I1219 03:54:18.420840   55957 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:54:18.420862   55957 api_server.go:253] Checking apiserver healthz at https://192.168.83.54:8443/healthz ...
	I1219 03:54:21.071318   55957 api_server.go:279] https://192.168.83.54:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1219 03:54:21.071349   55957 api_server.go:103] status: https://192.168.83.54:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1219 03:54:21.071368   55957 api_server.go:253] Checking apiserver healthz at https://192.168.83.54:8443/healthz ...
	I1219 03:54:21.151121   55957 api_server.go:279] https://192.168.83.54:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1219 03:54:21.151151   55957 api_server.go:103] status: https://192.168.83.54:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1219 03:54:21.421632   55957 api_server.go:253] Checking apiserver healthz at https://192.168.83.54:8443/healthz ...
	I1219 03:54:21.426745   55957 api_server.go:279] https://192.168.83.54:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:54:21.426773   55957 api_server.go:103] status: https://192.168.83.54:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:54:21.921398   55957 api_server.go:253] Checking apiserver healthz at https://192.168.83.54:8443/healthz ...
	I1219 03:54:21.927340   55957 api_server.go:279] https://192.168.83.54:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:54:21.927368   55957 api_server.go:103] status: https://192.168.83.54:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:54:22.420988   55957 api_server.go:253] Checking apiserver healthz at https://192.168.83.54:8443/healthz ...
	I1219 03:54:22.428236   55957 api_server.go:279] https://192.168.83.54:8443/healthz returned 200:
	ok
	I1219 03:54:22.439161   55957 api_server.go:141] control plane version: v1.34.3
	I1219 03:54:22.439190   55957 api_server.go:131] duration metric: took 4.018341977s to wait for apiserver health ...
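	(The polling above explains the earlier 403 and 500 responses: until the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish, the anonymous probe is either forbidden or the verbose health report lists those hooks as failed. The same endpoint can be queried by hand from the test host; a sketch, using the IP and port from this run, with -k because the API server certificate is not trusted by the host:

	    curl -ks https://192.168.83.54:8443/healthz?verbose

	Depending on RBAC state this may return the same system:anonymous 403 seen above rather than the per-check listing.)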
	I1219 03:54:22.439202   55957 cni.go:84] Creating CNI manager for ""
	I1219 03:54:22.439211   55957 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1219 03:54:22.440712   55957 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1219 03:54:22.442679   55957 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1219 03:54:22.464908   55957 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1219 03:54:22.524765   55957 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:54:22.531030   55957 system_pods.go:59] 8 kube-system pods found
	I1219 03:54:22.531082   55957 system_pods.go:61] "coredns-66bc5c9577-9ptrv" [22226444-faa6-420d-a862-1ef0441a80e4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:54:22.531096   55957 system_pods.go:61] "etcd-embed-certs-244717" [24fc3b7f-cbce-4591-9d18-9859136e38c6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:54:22.531109   55957 system_pods.go:61] "kube-apiserver-embed-certs-244717" [67d6bc2f-88f4-449a-a029-dc66d6ad1468] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:54:22.531117   55957 system_pods.go:61] "kube-controller-manager-embed-certs-244717" [f2be3fb7-4440-43a3-96ec-b2b241393a84] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:54:22.531126   55957 system_pods.go:61] "kube-proxy-p8gvm" [283607b2-9e6c-44f4-9c9d-7d713c71fb8c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:54:22.531135   55957 system_pods.go:61] "kube-scheduler-embed-certs-244717" [243ff9cd-448c-4e21-98aa-032de9f329fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:54:22.531151   55957 system_pods.go:61] "metrics-server-746fcd58dc-x74d4" [e4a33dcc-d3b6-45a0-92d7-6cfbc5df35b2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:54:22.531159   55957 system_pods.go:61] "storage-provisioner" [99ff9c60-2f30-457a-8cb5-e030eb64a58e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:54:22.531169   55957 system_pods.go:74] duration metric: took 6.378453ms to wait for pod list to return data ...
	I1219 03:54:22.531184   55957 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:54:22.538334   55957 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1219 03:54:22.538361   55957 node_conditions.go:123] node cpu capacity is 2
	I1219 03:54:22.538378   55957 node_conditions.go:105] duration metric: took 7.188571ms to run NodePressure ...
	I1219 03:54:22.538434   55957 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:54:22.838171   55957 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1219 03:54:22.841979   55957 kubeadm.go:744] kubelet initialised
	I1219 03:54:22.842009   55957 kubeadm.go:745] duration metric: took 3.812738ms waiting for restarted kubelet to initialise ...
	I1219 03:54:22.842027   55957 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1219 03:54:22.858280   55957 ops.go:34] apiserver oom_adj: -16
	I1219 03:54:22.858296   55957 kubeadm.go:602] duration metric: took 8.274282939s to restartPrimaryControlPlane
	I1219 03:54:22.858304   55957 kubeadm.go:403] duration metric: took 8.332738451s to StartCluster
	I1219 03:54:22.858319   55957 settings.go:142] acquiring lock: {Name:mkb131693a40b9aa50c302192272518c3561c861 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:54:22.858398   55957 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-5010/kubeconfig
	I1219 03:54:22.860091   55957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5010/kubeconfig: {Name:mk464ac013e036429e360ed78fc04687ed75f83a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:54:22.860306   55957 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.83.54 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1219 03:54:22.860397   55957 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1219 03:54:22.860520   55957 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-244717"
	I1219 03:54:22.860540   55957 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-244717"
	W1219 03:54:22.860553   55957 addons.go:248] addon storage-provisioner should already be in state true
	I1219 03:54:22.860556   55957 addons.go:70] Setting default-storageclass=true in profile "embed-certs-244717"
	I1219 03:54:22.860588   55957 config.go:182] Loaded profile config "embed-certs-244717": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:54:22.860638   55957 addons.go:70] Setting dashboard=true in profile "embed-certs-244717"
	I1219 03:54:22.860664   55957 addons.go:239] Setting addon dashboard=true in "embed-certs-244717"
	W1219 03:54:22.860674   55957 addons.go:248] addon dashboard should already be in state true
	I1219 03:54:22.860596   55957 host.go:66] Checking if "embed-certs-244717" exists ...
	I1219 03:54:22.860698   55957 host.go:66] Checking if "embed-certs-244717" exists ...
	I1219 03:54:22.860603   55957 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-244717"
	I1219 03:54:22.860613   55957 addons.go:70] Setting metrics-server=true in profile "embed-certs-244717"
	I1219 03:54:22.861202   55957 addons.go:239] Setting addon metrics-server=true in "embed-certs-244717"
	W1219 03:54:22.861219   55957 addons.go:248] addon metrics-server should already be in state true
	I1219 03:54:22.861243   55957 host.go:66] Checking if "embed-certs-244717" exists ...
	I1219 03:54:22.861875   55957 out.go:179] * Verifying Kubernetes components...
	I1219 03:54:22.862820   55957 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:54:22.863427   55957 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:54:22.863444   55957 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
	I1219 03:54:22.864891   55957 addons.go:239] Setting addon default-storageclass=true in "embed-certs-244717"
	W1219 03:54:22.864914   55957 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:54:22.864935   55957 host.go:66] Checking if "embed-certs-244717" exists ...
	I1219 03:54:22.866702   55957 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:54:22.866730   55957 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:54:22.866703   55957 main.go:144] libmachine: domain embed-certs-244717 has defined MAC address 52:54:00:c1:1a:c3 in network mk-embed-certs-244717
	I1219 03:54:22.866913   55957 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 03:54:22.867359   55957 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c1:1a:c3", ip: ""} in network mk-embed-certs-244717: {Iface:virbr5 ExpiryTime:2025-12-19 04:54:05 +0000 UTC Type:0 Mac:52:54:00:c1:1a:c3 Iaid: IPaddr:192.168.83.54 Prefix:24 Hostname:embed-certs-244717 Clientid:01:52:54:00:c1:1a:c3}
	I1219 03:54:22.867391   55957 main.go:144] libmachine: domain embed-certs-244717 has defined IP address 192.168.83.54 and MAC address 52:54:00:c1:1a:c3 in network mk-embed-certs-244717
	I1219 03:54:22.867616   55957 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1219 03:54:22.867638   55957 sshutil.go:53] new ssh client: &{IP:192.168.83.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/embed-certs-244717/id_rsa Username:docker}
	I1219 03:54:22.868328   55957 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:54:22.868344   55957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:54:22.868968   55957 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1219 03:54:22.869019   55957 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1219 03:54:22.870937   55957 main.go:144] libmachine: domain embed-certs-244717 has defined MAC address 52:54:00:c1:1a:c3 in network mk-embed-certs-244717
	I1219 03:54:22.871717   55957 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c1:1a:c3", ip: ""} in network mk-embed-certs-244717: {Iface:virbr5 ExpiryTime:2025-12-19 04:54:05 +0000 UTC Type:0 Mac:52:54:00:c1:1a:c3 Iaid: IPaddr:192.168.83.54 Prefix:24 Hostname:embed-certs-244717 Clientid:01:52:54:00:c1:1a:c3}
	I1219 03:54:22.871748   55957 main.go:144] libmachine: domain embed-certs-244717 has defined IP address 192.168.83.54 and MAC address 52:54:00:c1:1a:c3 in network mk-embed-certs-244717
	I1219 03:54:22.871986   55957 sshutil.go:53] new ssh client: &{IP:192.168.83.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/embed-certs-244717/id_rsa Username:docker}
	I1219 03:54:22.872790   55957 main.go:144] libmachine: domain embed-certs-244717 has defined MAC address 52:54:00:c1:1a:c3 in network mk-embed-certs-244717
	I1219 03:54:22.873111   55957 main.go:144] libmachine: domain embed-certs-244717 has defined MAC address 52:54:00:c1:1a:c3 in network mk-embed-certs-244717
	I1219 03:54:22.873212   55957 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c1:1a:c3", ip: ""} in network mk-embed-certs-244717: {Iface:virbr5 ExpiryTime:2025-12-19 04:54:05 +0000 UTC Type:0 Mac:52:54:00:c1:1a:c3 Iaid: IPaddr:192.168.83.54 Prefix:24 Hostname:embed-certs-244717 Clientid:01:52:54:00:c1:1a:c3}
	I1219 03:54:22.873235   55957 main.go:144] libmachine: domain embed-certs-244717 has defined IP address 192.168.83.54 and MAC address 52:54:00:c1:1a:c3 in network mk-embed-certs-244717
	I1219 03:54:22.873423   55957 sshutil.go:53] new ssh client: &{IP:192.168.83.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/embed-certs-244717/id_rsa Username:docker}
	I1219 03:54:22.873635   55957 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c1:1a:c3", ip: ""} in network mk-embed-certs-244717: {Iface:virbr5 ExpiryTime:2025-12-19 04:54:05 +0000 UTC Type:0 Mac:52:54:00:c1:1a:c3 Iaid: IPaddr:192.168.83.54 Prefix:24 Hostname:embed-certs-244717 Clientid:01:52:54:00:c1:1a:c3}
	I1219 03:54:22.873666   55957 main.go:144] libmachine: domain embed-certs-244717 has defined IP address 192.168.83.54 and MAC address 52:54:00:c1:1a:c3 in network mk-embed-certs-244717
	I1219 03:54:22.873832   55957 sshutil.go:53] new ssh client: &{IP:192.168.83.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/embed-certs-244717/id_rsa Username:docker}
	I1219 03:54:23.104462   55957 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:54:23.139781   55957 node_ready.go:35] waiting up to 6m0s for node "embed-certs-244717" to be "Ready" ...
	I1219 03:54:19.229464   56230 out.go:252] * Restarting existing kvm2 VM for "default-k8s-diff-port-168174" ...
	I1219 03:54:19.229501   56230 main.go:144] libmachine: starting domain...
	I1219 03:54:19.229509   56230 main.go:144] libmachine: ensuring networks are active...
	I1219 03:54:19.230233   56230 main.go:144] libmachine: Ensuring network default is active
	I1219 03:54:19.230721   56230 main.go:144] libmachine: Ensuring network mk-default-k8s-diff-port-168174 is active
	I1219 03:54:19.231248   56230 main.go:144] libmachine: getting domain XML...
	I1219 03:54:19.232369   56230 main.go:144] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>default-k8s-diff-port-168174</name>
	  <uuid>5503b0a8-1398-475d-b625-563c5bc2d168</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22230-5010/.minikube/machines/default-k8s-diff-port-168174/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22230-5010/.minikube/machines/default-k8s-diff-port-168174/default-k8s-diff-port-168174.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:d9:97:a2'/>
	      <source network='mk-default-k8s-diff-port-168174'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:3f:9e:c8'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
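	(The XML above is the existing libvirt domain definition minikube restarts for this profile; the same domain can be inspected directly with virsh against the system URI used in this run:

	    virsh --connect qemu:///system dominfo default-k8s-diff-port-168174
	    virsh --connect qemu:///system dumpxml default-k8s-diff-port-168174)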
	
	I1219 03:54:20.662520   56230 main.go:144] libmachine: waiting for domain to start...
	I1219 03:54:20.663943   56230 main.go:144] libmachine: domain is now running
	I1219 03:54:20.663969   56230 main.go:144] libmachine: waiting for IP...
	I1219 03:54:20.664770   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:20.665467   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has current primary IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:20.665481   56230 main.go:144] libmachine: found domain IP: 192.168.50.68
	I1219 03:54:20.665486   56230 main.go:144] libmachine: reserving static IP address...
	I1219 03:54:20.665943   56230 main.go:144] libmachine: found host DHCP lease matching {name: "default-k8s-diff-port-168174", mac: "52:54:00:d9:97:a2", ip: "192.168.50.68"} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:51:35 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:20.665989   56230 main.go:144] libmachine: skip adding static IP to network mk-default-k8s-diff-port-168174 - found existing host DHCP lease matching {name: "default-k8s-diff-port-168174", mac: "52:54:00:d9:97:a2", ip: "192.168.50.68"}
	I1219 03:54:20.666003   56230 main.go:144] libmachine: reserved static IP address 192.168.50.68 for domain default-k8s-diff-port-168174
	I1219 03:54:20.666019   56230 main.go:144] libmachine: waiting for SSH...
	I1219 03:54:20.666027   56230 main.go:144] libmachine: Getting to WaitForSSH function...
	I1219 03:54:20.668799   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:20.669225   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:51:35 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:20.669267   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:20.669495   56230 main.go:144] libmachine: Using SSH client type: native
	I1219 03:54:20.669789   56230 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.50.68 22 <nil> <nil>}
	I1219 03:54:20.669805   56230 main.go:144] libmachine: About to run SSH command:
	exit 0
	I1219 03:54:23.725788   56230 main.go:144] libmachine: Error dialing TCP: dial tcp 192.168.50.68:22: connect: no route to host
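	(A "no route to host" on port 22 at this point is the WaitForSSH loop probing before the restarted VM has brought its network up; it normally clears once the guest finishes booting. The probe can be reproduced by hand; a sketch in which the docker user and IP come from this log, while the per-profile key path follows the machines/<profile>/id_rsa layout shown for other profiles and is an assumption here:

	    ssh -o StrictHostKeyChecking=no -i /home/jenkins/minikube-integration/22230-5010/.minikube/machines/default-k8s-diff-port-168174/id_rsa docker@192.168.50.68 'exit 0')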
	I1219 03:54:21.532614   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:22.032788   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:22.532959   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:23.032773   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:23.531977   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:24.033500   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:24.532177   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:25.033441   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:25.533482   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:26.031758   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:23.198551   55957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:54:23.404667   55957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:54:23.420466   55957 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:54:23.445604   55957 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1219 03:54:23.445631   55957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1219 03:54:23.525300   55957 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1219 03:54:23.525326   55957 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1219 03:54:23.593759   55957 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:54:23.593784   55957 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1219 03:54:23.645141   55957 node_ready.go:49] node "embed-certs-244717" is "Ready"
	I1219 03:54:23.645171   55957 node_ready.go:38] duration metric: took 505.352434ms for node "embed-certs-244717" to be "Ready" ...
	I1219 03:54:23.645183   55957 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:54:23.645241   55957 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:54:23.652800   55957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:54:24.781529   55957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.376827148s)
	I1219 03:54:24.781591   55957 ssh_runner.go:235] Completed: test -f /usr/bin/helm: (1.361072264s)
	I1219 03:54:24.781616   55957 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.136359787s)
	I1219 03:54:24.781638   55957 api_server.go:72] duration metric: took 1.9213054s to wait for apiserver process to appear ...
	I1219 03:54:24.781645   55957 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:54:24.781662   55957 api_server.go:253] Checking apiserver healthz at https://192.168.83.54:8443/healthz ...
	I1219 03:54:24.781671   55957 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:54:24.791019   55957 api_server.go:279] https://192.168.83.54:8443/healthz returned 200:
	ok
	I1219 03:54:24.791945   55957 api_server.go:141] control plane version: v1.34.3
	I1219 03:54:24.791970   55957 api_server.go:131] duration metric: took 10.31791ms to wait for apiserver health ...
	I1219 03:54:24.791980   55957 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:54:24.795539   55957 system_pods.go:59] 8 kube-system pods found
	I1219 03:54:24.795599   55957 system_pods.go:61] "coredns-66bc5c9577-9ptrv" [22226444-faa6-420d-a862-1ef0441a80e4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:54:24.795612   55957 system_pods.go:61] "etcd-embed-certs-244717" [24fc3b7f-cbce-4591-9d18-9859136e38c6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:54:24.795627   55957 system_pods.go:61] "kube-apiserver-embed-certs-244717" [67d6bc2f-88f4-449a-a029-dc66d6ad1468] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:54:24.795638   55957 system_pods.go:61] "kube-controller-manager-embed-certs-244717" [f2be3fb7-4440-43a3-96ec-b2b241393a84] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:54:24.795644   55957 system_pods.go:61] "kube-proxy-p8gvm" [283607b2-9e6c-44f4-9c9d-7d713c71fb8c] Running
	I1219 03:54:24.795655   55957 system_pods.go:61] "kube-scheduler-embed-certs-244717" [243ff9cd-448c-4e21-98aa-032de9f329fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:54:24.795666   55957 system_pods.go:61] "metrics-server-746fcd58dc-x74d4" [e4a33dcc-d3b6-45a0-92d7-6cfbc5df35b2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:54:24.795671   55957 system_pods.go:61] "storage-provisioner" [99ff9c60-2f30-457a-8cb5-e030eb64a58e] Running
	I1219 03:54:24.795683   55957 system_pods.go:74] duration metric: took 3.696303ms to wait for pod list to return data ...
	I1219 03:54:24.795694   55957 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:54:24.797860   55957 default_sa.go:45] found service account: "default"
	I1219 03:54:24.797884   55957 default_sa.go:55] duration metric: took 2.181869ms for default service account to be created ...
	I1219 03:54:24.797895   55957 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:54:24.800212   55957 system_pods.go:86] 8 kube-system pods found
	I1219 03:54:24.800242   55957 system_pods.go:89] "coredns-66bc5c9577-9ptrv" [22226444-faa6-420d-a862-1ef0441a80e4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:54:24.800255   55957 system_pods.go:89] "etcd-embed-certs-244717" [24fc3b7f-cbce-4591-9d18-9859136e38c6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:54:24.800267   55957 system_pods.go:89] "kube-apiserver-embed-certs-244717" [67d6bc2f-88f4-449a-a029-dc66d6ad1468] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:54:24.800277   55957 system_pods.go:89] "kube-controller-manager-embed-certs-244717" [f2be3fb7-4440-43a3-96ec-b2b241393a84] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:54:24.800283   55957 system_pods.go:89] "kube-proxy-p8gvm" [283607b2-9e6c-44f4-9c9d-7d713c71fb8c] Running
	I1219 03:54:24.800291   55957 system_pods.go:89] "kube-scheduler-embed-certs-244717" [243ff9cd-448c-4e21-98aa-032de9f329fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:54:24.800300   55957 system_pods.go:89] "metrics-server-746fcd58dc-x74d4" [e4a33dcc-d3b6-45a0-92d7-6cfbc5df35b2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:54:24.800307   55957 system_pods.go:89] "storage-provisioner" [99ff9c60-2f30-457a-8cb5-e030eb64a58e] Running
	I1219 03:54:24.800317   55957 system_pods.go:126] duration metric: took 2.415918ms to wait for k8s-apps to be running ...
	I1219 03:54:24.800326   55957 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:54:24.800389   55957 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:54:24.901954   55957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.249113047s)
	I1219 03:54:24.901997   55957 addons.go:500] Verifying addon metrics-server=true in "embed-certs-244717"
	I1219 03:54:24.902043   55957 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	I1219 03:54:24.902053   55957 system_svc.go:56] duration metric: took 101.72157ms WaitForService to wait for kubelet
	I1219 03:54:24.902083   55957 kubeadm.go:587] duration metric: took 2.041739112s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:54:24.902106   55957 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:54:24.912597   55957 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1219 03:54:24.912623   55957 node_conditions.go:123] node cpu capacity is 2
	I1219 03:54:24.912638   55957 node_conditions.go:105] duration metric: took 10.525951ms to run NodePressure ...
	I1219 03:54:24.912652   55957 start.go:242] waiting for startup goroutines ...
	I1219 03:54:25.801998   55957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	I1219 03:54:29.507152   55957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (3.70510669s)
	I1219 03:54:29.507259   55957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:54:29.992247   55957 addons.go:500] Verifying addon dashboard=true in "embed-certs-244717"
	I1219 03:54:29.995517   55957 out.go:179] * Verifying dashboard addon...
	I1219 03:54:26.531479   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:27.031454   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:27.532215   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:28.032964   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:28.532268   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:29.032253   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:29.533154   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:30.032379   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:30.532853   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:31.032643   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:29.998065   55957 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:54:30.003541   55957 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:54:30.003561   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:30.510371   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:31.003319   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:31.502854   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:32.002809   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:32.503083   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:33.001709   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:29.805953   56230 main.go:144] libmachine: Error dialing TCP: dial tcp 192.168.50.68:22: connect: no route to host
	I1219 03:54:32.806901   56230 main.go:144] libmachine: Error dialing TCP: dial tcp 192.168.50.68:22: connect: connection refused
	I1219 03:54:31.531396   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:32.033946   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:32.532063   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:33.033088   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:33.532601   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:34.032154   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:34.532734   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:35.031403   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:35.532231   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:36.031798   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:33.501421   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:34.001823   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:34.501944   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:35.001242   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:35.502033   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:36.001834   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:36.503279   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:37.002832   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:37.501859   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:38.002218   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:35.914133   56230 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 03:54:35.917629   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:35.918062   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:35.918084   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:35.918331   56230 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/default-k8s-diff-port-168174/config.json ...
	I1219 03:54:35.918603   56230 machine.go:94] provisionDockerMachine start ...
	I1219 03:54:35.921009   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:35.921341   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:35.921380   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:35.921581   56230 main.go:144] libmachine: Using SSH client type: native
	I1219 03:54:35.921797   56230 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.50.68 22 <nil> <nil>}
	I1219 03:54:35.921810   56230 main.go:144] libmachine: About to run SSH command:
	hostname
	I1219 03:54:36.027619   56230 main.go:144] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1219 03:54:36.027644   56230 buildroot.go:166] provisioning hostname "default-k8s-diff-port-168174"
	I1219 03:54:36.030973   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.031540   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:36.031597   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.031855   56230 main.go:144] libmachine: Using SSH client type: native
	I1219 03:54:36.032105   56230 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.50.68 22 <nil> <nil>}
	I1219 03:54:36.032121   56230 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-168174 && echo "default-k8s-diff-port-168174" | sudo tee /etc/hostname
	I1219 03:54:36.154920   56230 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-168174
	
	I1219 03:54:36.157818   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.158270   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:36.158298   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.158481   56230 main.go:144] libmachine: Using SSH client type: native
	I1219 03:54:36.158705   56230 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.50.68 22 <nil> <nil>}
	I1219 03:54:36.158721   56230 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-168174' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-168174/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-168174' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1219 03:54:36.278763   56230 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 03:54:36.278793   56230 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22230-5010/.minikube CaCertPath:/home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22230-5010/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22230-5010/.minikube}
	I1219 03:54:36.278815   56230 buildroot.go:174] setting up certificates
	I1219 03:54:36.278825   56230 provision.go:84] configureAuth start
	I1219 03:54:36.282034   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.282595   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:36.282631   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.285039   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.285396   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:36.285421   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.285558   56230 provision.go:143] copyHostCerts
	I1219 03:54:36.285634   56230 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-5010/.minikube/ca.pem, removing ...
	I1219 03:54:36.285655   56230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-5010/.minikube/ca.pem
	I1219 03:54:36.285732   56230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22230-5010/.minikube/ca.pem (1082 bytes)
	I1219 03:54:36.285873   56230 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-5010/.minikube/cert.pem, removing ...
	I1219 03:54:36.285889   56230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-5010/.minikube/cert.pem
	I1219 03:54:36.285939   56230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22230-5010/.minikube/cert.pem (1123 bytes)
	I1219 03:54:36.286034   56230 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-5010/.minikube/key.pem, removing ...
	I1219 03:54:36.286044   56230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-5010/.minikube/key.pem
	I1219 03:54:36.286086   56230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22230-5010/.minikube/key.pem (1679 bytes)
	I1219 03:54:36.286187   56230 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22230-5010/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-168174 san=[127.0.0.1 192.168.50.68 default-k8s-diff-port-168174 localhost minikube]
	I1219 03:54:36.425832   56230 provision.go:177] copyRemoteCerts
	I1219 03:54:36.425892   56230 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1219 03:54:36.428255   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.428656   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:36.428686   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.428839   56230 sshutil.go:53] new ssh client: &{IP:192.168.50.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/default-k8s-diff-port-168174/id_rsa Username:docker}
	I1219 03:54:36.519020   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1219 03:54:36.558591   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1219 03:54:36.592448   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1219 03:54:36.618754   56230 provision.go:87] duration metric: took 339.918165ms to configureAuth
	I1219 03:54:36.618782   56230 buildroot.go:189] setting minikube options for container-runtime
	I1219 03:54:36.618965   56230 config.go:182] Loaded profile config "default-k8s-diff-port-168174": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:54:36.622080   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.622643   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:36.622690   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.622932   56230 main.go:144] libmachine: Using SSH client type: native
	I1219 03:54:36.623146   56230 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.50.68 22 <nil> <nil>}
	I1219 03:54:36.623170   56230 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1219 03:54:36.870072   56230 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1219 03:54:36.870099   56230 machine.go:97] duration metric: took 951.477635ms to provisionDockerMachine
	I1219 03:54:36.870113   56230 start.go:293] postStartSetup for "default-k8s-diff-port-168174" (driver="kvm2")
	I1219 03:54:36.870125   56230 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1219 03:54:36.870211   56230 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1219 03:54:36.873360   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.873824   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:36.873854   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.873997   56230 sshutil.go:53] new ssh client: &{IP:192.168.50.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/default-k8s-diff-port-168174/id_rsa Username:docker}
	I1219 03:54:36.957455   56230 ssh_runner.go:195] Run: cat /etc/os-release
	I1219 03:54:36.962098   56230 info.go:137] Remote host: Buildroot 2025.02
	I1219 03:54:36.962123   56230 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-5010/.minikube/addons for local assets ...
	I1219 03:54:36.962187   56230 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-5010/.minikube/files for local assets ...
	I1219 03:54:36.962258   56230 filesync.go:149] local asset: /home/jenkins/minikube-integration/22230-5010/.minikube/files/etc/ssl/certs/89372.pem -> 89372.pem in /etc/ssl/certs
	I1219 03:54:36.962365   56230 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1219 03:54:36.973208   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/files/etc/ssl/certs/89372.pem --> /etc/ssl/certs/89372.pem (1708 bytes)
	I1219 03:54:37.001535   56230 start.go:296] duration metric: took 131.409863ms for postStartSetup
	I1219 03:54:37.001590   56230 fix.go:56] duration metric: took 17.775113489s for fixHost
	I1219 03:54:37.004880   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:37.005287   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:37.005312   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:37.005528   56230 main.go:144] libmachine: Using SSH client type: native
	I1219 03:54:37.005820   56230 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.50.68 22 <nil> <nil>}
	I1219 03:54:37.005839   56230 main.go:144] libmachine: About to run SSH command:
	date +%s.%N
	I1219 03:54:37.113597   56230 main.go:144] libmachine: SSH cmd err, output: <nil>: 1766116477.079572846
	
	I1219 03:54:37.113621   56230 fix.go:216] guest clock: 1766116477.079572846
	I1219 03:54:37.113630   56230 fix.go:229] Guest: 2025-12-19 03:54:37.079572846 +0000 UTC Remote: 2025-12-19 03:54:37.001596336 +0000 UTC m=+17.891500693 (delta=77.97651ms)
	I1219 03:54:37.113645   56230 fix.go:200] guest clock delta is within tolerance: 77.97651ms
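	The fix.go lines above read the guest clock over SSH ("date +%s.%N"), compare it against the host-side timestamp, and accept the drift because it is below the tolerance. A minimal Go sketch of that comparison using the values from this log; the 2-second tolerance is an assumed value for illustration and is not taken from the log:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Values copied from the fix.go lines above.
		guest := time.Unix(1766116477, 79572846)                        // guest clock read via "date +%s.%N"
		remote := time.Date(2025, 12, 19, 3, 54, 37, 1596336, time.UTC) // host-side reference time

		delta := guest.Sub(remote)
		if delta < 0 {
			delta = -delta
		}

		// Prints delta=77.97651ms, matching the log line above. The tolerance
		// below is an assumption; the real threshold is not shown in this log.
		const tolerance = 2 * time.Second
		fmt.Printf("delta=%v, within tolerance: %v\n", delta, delta <= tolerance)
	}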
	I1219 03:54:37.113651   56230 start.go:83] releasing machines lock for "default-k8s-diff-port-168174", held for 17.887209269s
	I1219 03:54:37.116322   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:37.116867   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:37.116898   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:37.117549   56230 ssh_runner.go:195] Run: cat /version.json
	I1219 03:54:37.117645   56230 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1219 03:54:37.121299   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:37.121532   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:37.121841   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:37.121885   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:37.122114   56230 sshutil.go:53] new ssh client: &{IP:192.168.50.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/default-k8s-diff-port-168174/id_rsa Username:docker}
	I1219 03:54:37.122168   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:37.122203   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:37.122439   56230 sshutil.go:53] new ssh client: &{IP:192.168.50.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/default-k8s-diff-port-168174/id_rsa Username:docker}
	I1219 03:54:37.200188   56230 ssh_runner.go:195] Run: systemctl --version
	I1219 03:54:37.236006   56230 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1219 03:54:37.382400   56230 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1219 03:54:37.391093   56230 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1219 03:54:37.391172   56230 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1219 03:54:37.412549   56230 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1219 03:54:37.412595   56230 start.go:496] detecting cgroup driver to use...
	I1219 03:54:37.412701   56230 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1219 03:54:37.432292   56230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1219 03:54:37.448705   56230 docker.go:218] disabling cri-docker service (if available) ...
	I1219 03:54:37.448757   56230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1219 03:54:37.464885   56230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1219 03:54:37.488524   56230 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1219 03:54:37.648374   56230 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1219 03:54:37.863271   56230 docker.go:234] disabling docker service ...
	I1219 03:54:37.863333   56230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1219 03:54:37.880285   56230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1219 03:54:37.895631   56230 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1219 03:54:38.053642   56230 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1219 03:54:38.210829   56230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1219 03:54:38.227130   56230 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 03:54:38.248699   56230 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1219 03:54:38.248763   56230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:54:38.260875   56230 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1219 03:54:38.260939   56230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:54:38.273032   56230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:54:38.284839   56230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:54:38.296706   56230 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1219 03:54:38.309100   56230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:54:38.320373   56230 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:54:38.343213   56230 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:54:38.355251   56230 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1219 03:54:38.366693   56230 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1219 03:54:38.366745   56230 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1219 03:54:38.386325   56230 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
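	crio.go treats the failed sysctl probe above as non-fatal because the net.bridge key only exists once the br_netfilter module is loaded; it falls back to modprobe and then enables IPv4 forwarding. A minimal Go sketch of that sequence, where the run argument is a hypothetical stand-in for minikube's SSH command runner, not its actual API:

	package netprep

	import "fmt"

	// ensureBridgeNetfilter mirrors the fallback shown in the log: probe the
	// sysctl, load the kernel module only when the probe fails, then enable
	// IPv4 forwarding.
	func ensureBridgeNetfilter(run func(cmd string) error) error {
		if err := run("sudo sysctl net.bridge.bridge-nf-call-iptables"); err != nil {
			// Expected on a fresh guest: the key appears only after the module loads.
			if err := run("sudo modprobe br_netfilter"); err != nil {
				return fmt.Errorf("loading br_netfilter: %w", err)
			}
		}
		return run(`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`)
	}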
	I1219 03:54:38.397641   56230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:54:38.542778   56230 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1219 03:54:38.656266   56230 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1219 03:54:38.656354   56230 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1219 03:54:38.662225   56230 start.go:564] Will wait 60s for crictl version
	I1219 03:54:38.662286   56230 ssh_runner.go:195] Run: which crictl
	I1219 03:54:38.666072   56230 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1219 03:54:38.702242   56230 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1219 03:54:38.702324   56230 ssh_runner.go:195] Run: crio --version
	I1219 03:54:38.730733   56230 ssh_runner.go:195] Run: crio --version
	I1219 03:54:38.760806   56230 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.29.1 ...
	I1219 03:54:38.764622   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:38.765017   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:38.765041   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:38.765207   56230 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1219 03:54:38.769555   56230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:54:38.784218   56230 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-168174 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-168174 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.68 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1219 03:54:38.784318   56230 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1219 03:54:38.784389   56230 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:54:38.817654   56230 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.3". assuming images are not preloaded.
	I1219 03:54:38.817721   56230 ssh_runner.go:195] Run: which lz4
	I1219 03:54:38.821795   56230 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1219 03:54:38.826295   56230 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1219 03:54:38.826327   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340314847 bytes)
	I1219 03:54:36.531538   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:37.031367   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:37.531677   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:38.031134   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:38.532312   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:39.032552   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:39.532678   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:40.031267   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:40.531858   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:41.032379   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:38.502453   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:39.002949   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:39.502152   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:40.002580   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:40.501440   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:41.002612   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:41.501822   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:42.002247   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:42.502196   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:43.002641   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
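	Both processes above (55595 and 55957) poll the kubernetes-dashboard-web pod roughly every 500ms until it leaves Pending. A minimal client-go sketch of that polling pattern, assuming the same namespace and label selector; the function name and structure are illustrative and are not minikube's kapi.go:

	package podwait

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPodRunning lists pods matching the selector every 500ms (the cadence
	// visible in the log) until one reports phase Running or the timeout elapses.
	func waitForPodRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return nil
				}
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("pod %q did not reach Running within %v", selector, timeout)
	}

	Called with namespace "kubernetes-dashboard" and selector "app.kubernetes.io/name=kubernetes-dashboard-web", this reproduces the kind of wait the log lines show.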
	I1219 03:54:40.045060   56230 crio.go:462] duration metric: took 1.223302426s to copy over tarball
	I1219 03:54:40.045121   56230 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1219 03:54:41.702628   56230 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.657483082s)
	I1219 03:54:41.702653   56230 crio.go:469] duration metric: took 1.657571319s to extract the tarball
	I1219 03:54:41.702661   56230 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1219 03:54:41.742396   56230 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:54:41.778250   56230 crio.go:514] all images are preloaded for cri-o runtime.
	I1219 03:54:41.778274   56230 cache_images.go:86] Images are preloaded, skipping loading
	I1219 03:54:41.778281   56230 kubeadm.go:935] updating node { 192.168.50.68 8444 v1.34.3 crio true true} ...
	I1219 03:54:41.778393   56230 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-168174 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.68
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-168174 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1219 03:54:41.778466   56230 ssh_runner.go:195] Run: crio config
	I1219 03:54:41.824084   56230 cni.go:84] Creating CNI manager for ""
	I1219 03:54:41.824114   56230 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1219 03:54:41.824134   56230 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1219 03:54:41.824161   56230 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.68 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-168174 NodeName:default-k8s-diff-port-168174 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.68"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.68 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1219 03:54:41.824332   56230 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.68
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-168174"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.68"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.68"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1219 03:54:41.824436   56230 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1219 03:54:41.838181   56230 binaries.go:51] Found k8s binaries, skipping transfer
	I1219 03:54:41.838263   56230 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1219 03:54:41.850122   56230 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1219 03:54:41.871647   56230 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1219 03:54:41.891031   56230 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
	I1219 03:54:41.910970   56230 ssh_runner.go:195] Run: grep 192.168.50.68	control-plane.minikube.internal$ /etc/hosts
	I1219 03:54:41.915265   56230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.68	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:54:41.929042   56230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:54:42.077837   56230 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:54:42.111492   56230 certs.go:69] Setting up /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/default-k8s-diff-port-168174 for IP: 192.168.50.68
	I1219 03:54:42.111515   56230 certs.go:195] generating shared ca certs ...
	I1219 03:54:42.111529   56230 certs.go:227] acquiring lock for ca certs: {Name:mk433b742ffd55233764aebe3c6f298e0d1f02ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:54:42.111713   56230 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22230-5010/.minikube/ca.key
	I1219 03:54:42.111782   56230 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22230-5010/.minikube/proxy-client-ca.key
	I1219 03:54:42.111804   56230 certs.go:257] generating profile certs ...
	I1219 03:54:42.111942   56230 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/default-k8s-diff-port-168174/client.key
	I1219 03:54:42.112027   56230 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/default-k8s-diff-port-168174/apiserver.key.ed8a7a08
	I1219 03:54:42.112078   56230 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/default-k8s-diff-port-168174/proxy-client.key
	I1219 03:54:42.112201   56230 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/8937.pem (1338 bytes)
	W1219 03:54:42.112240   56230 certs.go:480] ignoring /home/jenkins/minikube-integration/22230-5010/.minikube/certs/8937_empty.pem, impossibly tiny 0 bytes
	I1219 03:54:42.112252   56230 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca-key.pem (1675 bytes)
	I1219 03:54:42.112280   56230 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca.pem (1082 bytes)
	I1219 03:54:42.112309   56230 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/cert.pem (1123 bytes)
	I1219 03:54:42.112361   56230 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/key.pem (1679 bytes)
	I1219 03:54:42.112423   56230 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/files/etc/ssl/certs/89372.pem (1708 bytes)
	I1219 03:54:42.113420   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1219 03:54:42.154291   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1219 03:54:42.194006   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1219 03:54:42.221732   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1219 03:54:42.253007   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/default-k8s-diff-port-168174/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1219 03:54:42.280935   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/default-k8s-diff-port-168174/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1219 03:54:42.315083   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/default-k8s-diff-port-168174/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1219 03:54:42.342426   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/default-k8s-diff-port-168174/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1219 03:54:42.371444   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1219 03:54:42.402350   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/certs/8937.pem --> /usr/share/ca-certificates/8937.pem (1338 bytes)
	I1219 03:54:42.430533   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/files/etc/ssl/certs/89372.pem --> /usr/share/ca-certificates/89372.pem (1708 bytes)
	I1219 03:54:42.462798   56230 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1219 03:54:42.483977   56230 ssh_runner.go:195] Run: openssl version
	I1219 03:54:42.490839   56230 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/89372.pem
	I1219 03:54:42.503565   56230 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/89372.pem /etc/ssl/certs/89372.pem
	I1219 03:54:42.514852   56230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/89372.pem
	I1219 03:54:42.520693   56230 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 19 02:46 /usr/share/ca-certificates/89372.pem
	I1219 03:54:42.520739   56230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/89372.pem
	I1219 03:54:42.528108   56230 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1219 03:54:42.539720   56230 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/89372.pem /etc/ssl/certs/3ec20f2e.0
	I1219 03:54:42.550915   56230 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:54:42.561679   56230 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1219 03:54:42.572526   56230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:54:42.577725   56230 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 19 02:26 /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:54:42.577781   56230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:54:42.584786   56230 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1219 03:54:42.596115   56230 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1219 03:54:42.607332   56230 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8937.pem
	I1219 03:54:42.618682   56230 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8937.pem /etc/ssl/certs/8937.pem
	I1219 03:54:42.630292   56230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8937.pem
	I1219 03:54:42.635409   56230 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 19 02:46 /usr/share/ca-certificates/8937.pem
	I1219 03:54:42.635452   56230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8937.pem
	I1219 03:54:42.642710   56230 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1219 03:54:42.654104   56230 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8937.pem /etc/ssl/certs/51391683.0
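The sequence above installs the extra CA certificates and then creates OpenSSL hash-named symlinks under /etc/ssl/certs (3ec20f2e.0, b5213941.0, 51391683.0). A minimal sketch of how such a symlink name is derived, using a placeholder certificate path rather than one taken from this run:

    # sketch: the symlink name is the certificate's subject hash plus ".0"
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example.pem)
    sudo ln -fs /usr/share/ca-certificates/example.pem /etc/ssl/certs/example.pem
    sudo ln -fs /etc/ssl/certs/example.pem "/etc/ssl/certs/${HASH}.0"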
	I1219 03:54:42.666207   56230 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1219 03:54:42.671385   56230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1219 03:54:42.678373   56230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1219 03:54:42.685534   56230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1219 03:54:42.692140   56230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1219 03:54:42.698549   56230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1219 03:54:42.705279   56230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
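Each control-plane certificate is then validated with openssl's -checkend flag, which exits non-zero if the certificate expires within the given number of seconds (86400 = 24 hours). A minimal sketch using one of the paths from the log:

    # sketch: fail if the certificate expires within the next 24 hours
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "valid for at least 24h" \
      || echo "expires within 24h (or is already expired)"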
	I1219 03:54:42.712285   56230 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-168174 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.34.3 ClusterName:default-k8s-diff-port-168174 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.68 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0
CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:54:42.712383   56230 cri.go:57] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1219 03:54:42.712433   56230 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:54:42.745951   56230 cri.go:92] found id: ""
	I1219 03:54:42.746000   56230 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1219 03:54:42.757185   56230 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1219 03:54:42.757201   56230 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1219 03:54:42.757240   56230 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1219 03:54:42.768155   56230 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1219 03:54:42.769156   56230 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-168174" does not appear in /home/jenkins/minikube-integration/22230-5010/kubeconfig
	I1219 03:54:42.769826   56230 kubeconfig.go:62] /home/jenkins/minikube-integration/22230-5010/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-168174" cluster setting kubeconfig missing "default-k8s-diff-port-168174" context setting]
	I1219 03:54:42.770666   56230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5010/kubeconfig: {Name:mk464ac013e036429e360ed78fc04687ed75f83a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:54:42.772207   56230 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1219 03:54:42.782776   56230 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.50.68
	I1219 03:54:42.782799   56230 kubeadm.go:1161] stopping kube-system containers ...
	I1219 03:54:42.782811   56230 cri.go:57] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1219 03:54:42.782853   56230 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:54:42.827373   56230 cri.go:92] found id: ""
	I1219 03:54:42.827451   56230 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1219 03:54:42.855644   56230 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1219 03:54:42.867640   56230 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1219 03:54:42.867664   56230 kubeadm.go:158] found existing configuration files:
	
	I1219 03:54:42.867713   56230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1219 03:54:42.879242   56230 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1219 03:54:42.879345   56230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1219 03:54:42.890737   56230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1219 03:54:42.900979   56230 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1219 03:54:42.901033   56230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1219 03:54:42.911989   56230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1219 03:54:42.922081   56230 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1219 03:54:42.922121   56230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1219 03:54:42.933197   56230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1219 03:54:42.943650   56230 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1219 03:54:42.943706   56230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
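The block above checks each static kubeconfig under /etc/kubernetes for the expected control-plane endpoint and deletes any file that does not reference it (in this run all four files are simply missing). The equivalent loop, sketched from the commands shown in the log:

    # sketch: drop kubeconfigs that do not point at the expected control-plane endpoint
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8444" "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done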
	I1219 03:54:42.954819   56230 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1219 03:54:42.965503   56230 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:54:43.022499   56230 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:54:41.533216   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:42.031785   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:42.531762   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:43.032044   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:43.531965   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:44.032348   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:44.532701   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:45.032707   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:45.531729   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:46.032031   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:43.502226   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:44.002160   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:44.502401   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:45.002719   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:45.502332   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:46.001536   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:46.501990   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:47.002547   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:47.501403   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:48.002631   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:44.652743   56230 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.630210852s)
	I1219 03:54:44.652817   56230 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:54:44.912221   56230 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:54:44.996004   56230 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
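Rather than a full kubeadm init, the restart path re-runs the individual init phases against the regenerated config. The phase sequence visible in this log, written out as the commands minikube executes over SSH:

    # sketch of the logged phase sequence (binaries dir and config path as in the log)
    export PATH="/var/lib/minikube/binaries/v1.34.3:$PATH"
    sudo kubeadm init phase certs all         --config /var/tmp/minikube/kubeadm.yaml
    sudo kubeadm init phase kubeconfig all    --config /var/tmp/minikube/kubeadm.yaml
    sudo kubeadm init phase kubelet-start     --config /var/tmp/minikube/kubeadm.yaml
    sudo kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
    sudo kubeadm init phase etcd local        --config /var/tmp/minikube/kubeadm.yaml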
	I1219 03:54:45.067644   56230 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:54:45.067725   56230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:54:45.568080   56230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:54:46.068722   56230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:54:46.568114   56230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:54:47.068013   56230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:54:47.117129   56230 api_server.go:72] duration metric: took 2.049494189s to wait for apiserver process to appear ...
	I1219 03:54:47.117153   56230 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:54:47.117174   56230 api_server.go:253] Checking apiserver healthz at https://192.168.50.68:8444/healthz ...
	I1219 03:54:47.117680   56230 api_server.go:269] stopped: https://192.168.50.68:8444/healthz: Get "https://192.168.50.68:8444/healthz": dial tcp 192.168.50.68:8444: connect: connection refused
	I1219 03:54:47.617323   56230 api_server.go:253] Checking apiserver healthz at https://192.168.50.68:8444/healthz ...
	I1219 03:54:46.534635   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:47.031616   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:47.531182   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:48.032359   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:48.532986   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:49.031214   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:49.532385   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:50.032130   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:50.532478   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:51.031638   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:49.988621   56230 api_server.go:279] https://192.168.50.68:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1219 03:54:49.988647   56230 api_server.go:103] status: https://192.168.50.68:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1219 03:54:49.988661   56230 api_server.go:253] Checking apiserver healthz at https://192.168.50.68:8444/healthz ...
	I1219 03:54:50.015383   56230 api_server.go:279] https://192.168.50.68:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1219 03:54:50.015404   56230 api_server.go:103] status: https://192.168.50.68:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1219 03:54:50.117699   56230 api_server.go:253] Checking apiserver healthz at https://192.168.50.68:8444/healthz ...
	I1219 03:54:50.129872   56230 api_server.go:279] https://192.168.50.68:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:54:50.129895   56230 api_server.go:103] status: https://192.168.50.68:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:54:50.617488   56230 api_server.go:253] Checking apiserver healthz at https://192.168.50.68:8444/healthz ...
	I1219 03:54:50.622220   56230 api_server.go:279] https://192.168.50.68:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:54:50.622255   56230 api_server.go:103] status: https://192.168.50.68:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:54:51.117929   56230 api_server.go:253] Checking apiserver healthz at https://192.168.50.68:8444/healthz ...
	I1219 03:54:51.126710   56230 api_server.go:279] https://192.168.50.68:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:54:51.126741   56230 api_server.go:103] status: https://192.168.50.68:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:54:51.617345   56230 api_server.go:253] Checking apiserver healthz at https://192.168.50.68:8444/healthz ...
	I1219 03:54:51.622349   56230 api_server.go:279] https://192.168.50.68:8444/healthz returned 200:
	ok
	I1219 03:54:51.628913   56230 api_server.go:141] control plane version: v1.34.3
	I1219 03:54:51.628947   56230 api_server.go:131] duration metric: took 4.511785965s to wait for apiserver health ...
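The healthz polling above shows the apiserver coming up in stages: connection refused while the static pod starts, 403 for the anonymous probe, 500 while post-start hooks (service-ip-repair, rbac/bootstrap-roles, bootstrap priority classes) finish, and finally 200. A rough way to reproduce the same probe from the node, assuming curl is available there:

    # sketch: anonymous healthz probe, skipping TLS verification (-k), until it returns "ok"
    until curl -ks https://192.168.50.68:8444/healthz | grep -qx ok; do sleep 0.5; done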
	I1219 03:54:51.628957   56230 cni.go:84] Creating CNI manager for ""
	I1219 03:54:51.628965   56230 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1219 03:54:51.630494   56230 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1219 03:54:51.631426   56230 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1219 03:54:51.647385   56230 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1219 03:54:51.669320   56230 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:54:51.675232   56230 system_pods.go:59] 8 kube-system pods found
	I1219 03:54:51.675273   56230 system_pods.go:61] "coredns-66bc5c9577-dnfcc" [b8ee66a7-b129-4499-aad8-a988ecea241c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:54:51.675288   56230 system_pods.go:61] "etcd-default-k8s-diff-port-168174" [af7e1768-95b9-431e-877a-63138a91dcdc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:54:51.675298   56230 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-168174" [a59f79b0-b99b-4092-8b4a-773ff0a04569] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:54:51.675318   56230 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-168174" [405651c9-cdc7-414f-a512-f11b7b580eb8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:54:51.675328   56230 system_pods.go:61] "kube-proxy-zs4wg" [2212782c-32ab-4355-8dda-9117953b0223] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:54:51.675338   56230 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-168174" [a524e706-e546-4aa4-848c-f52eb802ffb0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:54:51.675347   56230 system_pods.go:61] "metrics-server-746fcd58dc-xjkbx" [c6e2f2b2-7b94-4ff2-85ba-e79d72b30655] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:54:51.675358   56230 system_pods.go:61] "storage-provisioner" [ec1d1888-a950-48d5-9b73-440e7556818b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:54:51.675366   56230 system_pods.go:74] duration metric: took 6.023523ms to wait for pod list to return data ...
	I1219 03:54:51.675387   56230 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:54:51.680456   56230 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1219 03:54:51.680483   56230 node_conditions.go:123] node cpu capacity is 2
	I1219 03:54:51.680500   56230 node_conditions.go:105] duration metric: took 5.106096ms to run NodePressure ...
	I1219 03:54:51.680558   56230 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:54:51.941503   56230 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1219 03:54:51.945528   56230 kubeadm.go:744] kubelet initialised
	I1219 03:54:51.945566   56230 kubeadm.go:745] duration metric: took 4.028139ms waiting for restarted kubelet to initialise ...
	I1219 03:54:51.945597   56230 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1219 03:54:51.967660   56230 ops.go:34] apiserver oom_adj: -16
	I1219 03:54:51.967680   56230 kubeadm.go:602] duration metric: took 9.210474475s to restartPrimaryControlPlane
	I1219 03:54:51.967689   56230 kubeadm.go:403] duration metric: took 9.255411647s to StartCluster
	I1219 03:54:51.967705   56230 settings.go:142] acquiring lock: {Name:mkb131693a40b9aa50c302192272518c3561c861 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:54:51.967787   56230 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-5010/kubeconfig
	I1219 03:54:51.970216   56230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5010/kubeconfig: {Name:mk464ac013e036429e360ed78fc04687ed75f83a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:54:51.970558   56230 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.50.68 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1219 03:54:51.970693   56230 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1219 03:54:51.970789   56230 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-168174"
	I1219 03:54:51.970812   56230 config.go:182] Loaded profile config "default-k8s-diff-port-168174": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:54:51.970826   56230 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-168174"
	I1219 03:54:51.970825   56230 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-168174"
	I1219 03:54:51.970846   56230 addons.go:70] Setting metrics-server=true in profile "default-k8s-diff-port-168174"
	I1219 03:54:51.970858   56230 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-168174"
	I1219 03:54:51.970858   56230 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-168174"
	I1219 03:54:51.970884   56230 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-168174"
	W1219 03:54:51.970893   56230 addons.go:248] addon dashboard should already be in state true
	I1219 03:54:51.970919   56230 host.go:66] Checking if "default-k8s-diff-port-168174" exists ...
	W1219 03:54:51.970836   56230 addons.go:248] addon storage-provisioner should already be in state true
	I1219 03:54:51.970978   56230 host.go:66] Checking if "default-k8s-diff-port-168174" exists ...
	I1219 03:54:51.970861   56230 addons.go:239] Setting addon metrics-server=true in "default-k8s-diff-port-168174"
	W1219 03:54:51.971035   56230 addons.go:248] addon metrics-server should already be in state true
	I1219 03:54:51.971057   56230 host.go:66] Checking if "default-k8s-diff-port-168174" exists ...
	I1219 03:54:51.971960   56230 out.go:179] * Verifying Kubernetes components...
	I1219 03:54:51.973008   56230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:54:51.974650   56230 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:54:51.974726   56230 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
	I1219 03:54:51.974952   56230 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1219 03:54:51.975006   56230 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 03:54:48.502712   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:49.001711   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:49.501592   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:50.001601   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:50.501313   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:51.002296   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:51.502360   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:52.002651   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:52.503108   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:53.002165   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:51.975433   56230 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-168174"
	W1219 03:54:51.975454   56230 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:54:51.975493   56230 host.go:66] Checking if "default-k8s-diff-port-168174" exists ...
	I1219 03:54:51.975992   56230 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1219 03:54:51.976010   56230 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1219 03:54:51.976037   56230 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:54:51.976049   56230 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:54:51.978029   56230 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:54:51.978047   56230 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:54:51.979030   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:51.979580   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:51.979617   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:51.979992   56230 sshutil.go:53] new ssh client: &{IP:192.168.50.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/default-k8s-diff-port-168174/id_rsa Username:docker}
	I1219 03:54:51.980624   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:51.980627   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:51.981054   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:51.981088   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:51.981091   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:51.981123   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:51.981299   56230 sshutil.go:53] new ssh client: &{IP:192.168.50.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/default-k8s-diff-port-168174/id_rsa Username:docker}
	I1219 03:54:51.981430   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:51.981442   56230 sshutil.go:53] new ssh client: &{IP:192.168.50.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/default-k8s-diff-port-168174/id_rsa Username:docker}
	I1219 03:54:51.981908   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:51.981931   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:51.982118   56230 sshutil.go:53] new ssh client: &{IP:192.168.50.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/default-k8s-diff-port-168174/id_rsa Username:docker}
	I1219 03:54:52.329267   56230 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:54:52.362110   56230 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-168174" to be "Ready" ...
	I1219 03:54:52.365712   56230 node_ready.go:49] node "default-k8s-diff-port-168174" is "Ready"
	I1219 03:54:52.365740   56230 node_ready.go:38] duration metric: took 3.595186ms for node "default-k8s-diff-port-168174" to be "Ready" ...
	I1219 03:54:52.365758   56230 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:54:52.365821   56230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:54:52.390728   56230 api_server.go:72] duration metric: took 420.108978ms to wait for apiserver process to appear ...
	I1219 03:54:52.390759   56230 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:54:52.390781   56230 api_server.go:253] Checking apiserver healthz at https://192.168.50.68:8444/healthz ...
	I1219 03:54:52.397481   56230 api_server.go:279] https://192.168.50.68:8444/healthz returned 200:
	ok
	I1219 03:54:52.398595   56230 api_server.go:141] control plane version: v1.34.3
	I1219 03:54:52.398619   56230 api_server.go:131] duration metric: took 7.851716ms to wait for apiserver health ...
	I1219 03:54:52.398634   56230 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:54:52.403556   56230 system_pods.go:59] 8 kube-system pods found
	I1219 03:54:52.403621   56230 system_pods.go:61] "coredns-66bc5c9577-dnfcc" [b8ee66a7-b129-4499-aad8-a988ecea241c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:54:52.403638   56230 system_pods.go:61] "etcd-default-k8s-diff-port-168174" [af7e1768-95b9-431e-877a-63138a91dcdc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:54:52.403653   56230 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-168174" [a59f79b0-b99b-4092-8b4a-773ff0a04569] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:54:52.403664   56230 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-168174" [405651c9-cdc7-414f-a512-f11b7b580eb8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:54:52.403676   56230 system_pods.go:61] "kube-proxy-zs4wg" [2212782c-32ab-4355-8dda-9117953b0223] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:54:52.403690   56230 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-168174" [a524e706-e546-4aa4-848c-f52eb802ffb0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:54:52.403705   56230 system_pods.go:61] "metrics-server-746fcd58dc-xjkbx" [c6e2f2b2-7b94-4ff2-85ba-e79d72b30655] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:54:52.403714   56230 system_pods.go:61] "storage-provisioner" [ec1d1888-a950-48d5-9b73-440e7556818b] Running
	I1219 03:54:52.403725   56230 system_pods.go:74] duration metric: took 5.080532ms to wait for pod list to return data ...
	I1219 03:54:52.403737   56230 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:54:52.406964   56230 default_sa.go:45] found service account: "default"
	I1219 03:54:52.406989   56230 default_sa.go:55] duration metric: took 3.241415ms for default service account to be created ...
	I1219 03:54:52.406999   56230 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:54:52.412763   56230 system_pods.go:86] 8 kube-system pods found
	I1219 03:54:52.412787   56230 system_pods.go:89] "coredns-66bc5c9577-dnfcc" [b8ee66a7-b129-4499-aad8-a988ecea241c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:54:52.412797   56230 system_pods.go:89] "etcd-default-k8s-diff-port-168174" [af7e1768-95b9-431e-877a-63138a91dcdc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:54:52.412804   56230 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-168174" [a59f79b0-b99b-4092-8b4a-773ff0a04569] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:54:52.412810   56230 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-168174" [405651c9-cdc7-414f-a512-f11b7b580eb8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:54:52.412816   56230 system_pods.go:89] "kube-proxy-zs4wg" [2212782c-32ab-4355-8dda-9117953b0223] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:54:52.412821   56230 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-168174" [a524e706-e546-4aa4-848c-f52eb802ffb0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:54:52.412826   56230 system_pods.go:89] "metrics-server-746fcd58dc-xjkbx" [c6e2f2b2-7b94-4ff2-85ba-e79d72b30655] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:54:52.412830   56230 system_pods.go:89] "storage-provisioner" [ec1d1888-a950-48d5-9b73-440e7556818b] Running
	I1219 03:54:52.412837   56230 system_pods.go:126] duration metric: took 5.832618ms to wait for k8s-apps to be running ...
	I1219 03:54:52.412847   56230 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:54:52.412890   56230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:54:52.437131   56230 system_svc.go:56] duration metric: took 24.267658ms WaitForService to wait for kubelet
	I1219 03:54:52.437166   56230 kubeadm.go:587] duration metric: took 466.551246ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:54:52.437188   56230 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:54:52.440753   56230 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1219 03:54:52.440776   56230 node_conditions.go:123] node cpu capacity is 2
	I1219 03:54:52.440789   56230 node_conditions.go:105] duration metric: took 3.595658ms to run NodePressure ...
	I1219 03:54:52.440804   56230 start.go:242] waiting for startup goroutines ...
	I1219 03:54:52.571235   56230 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:54:52.579720   56230 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:54:52.588696   56230 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	I1219 03:54:52.607999   56230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:54:52.623079   56230 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1219 03:54:52.623103   56230 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1219 03:54:52.632201   56230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:54:52.689775   56230 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1219 03:54:52.689802   56230 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1219 03:54:52.755241   56230 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:54:52.755280   56230 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1219 03:54:52.860818   56230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:54:51.531836   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:52.032945   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:52.532771   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:53.031681   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:53.532510   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:54.032369   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:54.532915   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:55.031905   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:55.531152   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:56.032011   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:53.502165   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:54.002813   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:54.501582   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:55.002986   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:55.501711   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:56.000984   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:56.502399   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:57.002200   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:57.502369   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:58.002000   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:54.655285   56230 ssh_runner.go:235] Completed: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh": (2.066552827s)
	I1219 03:54:54.655390   56230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	I1219 03:54:54.655405   56230 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.047371795s)
	I1219 03:54:54.655528   56230 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.023298979s)
	I1219 03:54:54.655657   56230 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.794802456s)
	I1219 03:54:54.655684   56230 addons.go:500] Verifying addon metrics-server=true in "default-k8s-diff-port-168174"
	I1219 03:54:57.969258   56230 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (3.313828747s)
	I1219 03:54:57.969346   56230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:54:58.498709   56230 addons.go:500] Verifying addon dashboard=true in "default-k8s-diff-port-168174"
	I1219 03:54:58.501734   56230 out.go:179] * Verifying dashboard addon...
	I1219 03:54:58.504348   56230 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:54:58.510036   56230 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:54:58.510056   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:59.010436   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:56.532022   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:57.031585   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:57.531985   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:58.032925   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:58.533378   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:59.032504   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:59.530653   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:00.031045   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:00.531549   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:01.030879   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:58.502926   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:59.001807   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:59.501672   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:00.001239   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:00.501991   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:01.001622   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:01.501692   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:02.002517   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:02.502331   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:03.001757   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:59.508121   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:00.008244   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:00.507884   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:01.008223   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:01.507861   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:02.012677   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:02.507898   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:03.008121   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:03.508367   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:04.008842   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:01.531235   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:02.031845   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:02.531542   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:03.030822   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:03.532087   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:04.032140   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:04.532095   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:05.032183   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:05.532546   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:06.031699   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:03.501262   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:04.001782   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:04.501640   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:05.002705   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:05.501849   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:06.001647   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:06.502225   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:07.002170   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:07.502397   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:08.003244   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:04.508346   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:05.007493   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:05.507987   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:06.007825   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:06.507635   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:07.008062   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:07.507047   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:08.008442   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:08.510089   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:09.008180   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:06.536198   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:07.032221   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:07.532227   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:08.032198   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:08.531813   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:09.031889   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:09.531666   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:10.031122   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:10.532149   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:11.031983   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:08.502642   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:09.001743   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:09.502066   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:10.002044   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:10.502017   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:11.002386   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:11.502467   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:12.002107   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:12.502677   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:13.002161   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:09.507112   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:10.008461   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:10.508312   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:11.008611   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:11.508384   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:12.008280   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:12.508541   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:13.008623   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:13.508431   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:14.009349   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:11.532619   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:12.031875   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:12.532589   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:13.031244   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:13.531877   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:14.031690   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:14.531758   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:15.032196   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:15.531803   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:16.030943   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:13.502018   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:14.002044   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:14.502072   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:15.002330   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:15.502958   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:16.001850   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:16.501605   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:17.001853   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:17.501780   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:18.001784   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:14.508124   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:15.008333   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:15.508545   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:16.008130   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:16.507985   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:17.007539   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:17.508141   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:18.007919   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:18.507523   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:19.008842   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:16.532418   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:17.032219   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:17.532547   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:18.032233   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:18.532551   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:19.033166   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:19.531532   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:20.031971   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:20.532050   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:21.032787   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:18.501956   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:19.002220   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:19.502226   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:20.003355   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:20.501800   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:21.001708   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:21.501127   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:22.003195   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:22.502775   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:23.002072   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:19.507432   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:20.008746   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:20.508268   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:21.008770   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:21.508749   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:22.009746   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:22.509595   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:23.008351   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:23.508700   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:24.009427   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:21.532398   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:22.033297   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:22.531966   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:23.032953   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:23.532813   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:24.032632   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:24.531743   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:25.031446   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:25.531999   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:26.032229   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:23.502200   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:24.002490   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:24.502281   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:25.002814   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:25.502200   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:26.001250   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:26.502303   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:27.003201   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:27.501897   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:28.001740   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:24.508429   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:25.008390   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:25.507941   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:26.007624   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:26.508269   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:27.008250   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:27.508598   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:28.008371   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:28.508380   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:29.008493   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:26.531979   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:27.031753   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:27.531087   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:28.031427   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:28.533856   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:29.032558   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:29.532153   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:30.031923   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:30.531960   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:31.032601   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:28.501692   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:29.001922   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:29.501325   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:30.003828   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:30.502896   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:31.002912   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:31.501760   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:32.001551   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:32.503707   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:33.002109   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:29.508499   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:30.009212   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:30.508512   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:31.008223   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:31.508681   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:32.008636   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:32.508533   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:33.008248   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:33.507749   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:34.010179   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:31.531439   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:32.033650   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:32.532006   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:33.033362   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:33.532163   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:34.032485   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:34.532885   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:35.032795   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:35.531588   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:36.032179   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:33.502338   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:34.001955   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:34.502091   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:35.002307   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:35.502793   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:36.000849   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:36.501606   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:37.001431   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:37.502037   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:38.001873   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:34.507899   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:35.009735   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:35.508708   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:36.008927   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:36.508321   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:37.008289   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:37.507348   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:38.009029   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:38.507232   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:39.007368   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:36.532210   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:37.032304   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:37.531955   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:38.031259   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:38.532301   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:39.032157   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:39.531594   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:40.032495   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:40.532008   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:41.032133   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:38.501770   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:39.002435   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:39.502300   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:40.002307   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:40.501724   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:41.002293   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:41.503636   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:42.001410   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:42.504029   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:43.001789   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:39.508096   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:40.009356   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:40.507852   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:41.007460   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:41.508444   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:42.008364   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:42.507697   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:43.008880   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:43.508861   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:44.008835   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:41.532091   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:42.032010   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:42.531306   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:43.031852   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:43.531186   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:44.032131   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:44.531205   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:45.032529   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:45.532677   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:46.033016   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:43.502472   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:44.001435   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:44.502091   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:45.001734   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:45.501352   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:46.002340   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:46.502315   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:47.002534   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:47.501024   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:48.001249   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:44.507519   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:45.008950   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:45.507774   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:46.009594   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:46.507884   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:47.007928   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:47.507777   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:48.009168   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:48.507455   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:49.009287   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:46.532161   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:47.032066   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:47.531975   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:48.031583   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:48.532654   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:49.033122   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:49.531676   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:50.031185   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:50.532468   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:51.032385   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:48.501786   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:49.002482   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:49.502524   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:50.001342   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:50.502134   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:51.003763   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:51.502136   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:52.001766   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:52.502345   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:53.001599   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:49.508543   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:50.009242   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:50.508054   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:51.009144   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:51.508104   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:52.008088   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:52.507250   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:53.009098   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:53.507611   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:54.010519   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:51.531780   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:52.031001   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:52.532489   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:53.032242   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:53.536320   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:54.033455   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:54.532129   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:55.031767   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:55.531204   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:56.031365   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:53.503558   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:54.001726   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:54.501144   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:55.001613   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:55.502734   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:56.002274   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:56.501831   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:57.001426   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:57.503884   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:58.001283   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:54.508611   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:55.009353   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:55.507657   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:56.007664   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:56.507861   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:57.007544   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:57.507689   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:58.008330   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:58.508469   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:59.009715   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:56.532345   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:57.032801   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:57.531689   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:58.032877   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:58.532081   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:59.032107   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:59.531920   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:00.031409   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:00.532046   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:01.032408   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:58.501828   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:59.001518   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:59.502563   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:00.002507   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:00.501333   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:01.002564   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:01.502379   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:02.001485   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:02.501810   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:03.001402   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:59.508191   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:00.008241   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:00.508833   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:01.008453   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:01.508563   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:02.008613   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:02.509524   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:03.008844   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:03.507854   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:04.007055   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:01.532493   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:02.033676   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:02.532206   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:03.031784   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:03.532118   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:04.032496   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:04.532286   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:05.032724   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:05.533137   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:06.031294   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:03.502666   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:04.001524   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:04.501177   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:05.001644   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:05.503328   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:06.002433   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:06.502361   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:07.002735   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:07.501301   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:08.001765   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:04.508242   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:05.008660   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:05.507962   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:06.008796   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:06.508749   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:07.009651   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:07.508080   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:08.008550   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:08.509473   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:09.008511   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:06.533457   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:07.032379   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:07.532473   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:08.032865   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:08.531464   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:09.032764   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:09.531236   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:10.032148   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:10.532471   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:11.032216   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:08.502684   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:09.002188   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:09.503237   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:10.001912   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:10.501622   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:11.001891   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:11.502012   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:12.001650   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:12.502856   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:13.001921   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:09.507699   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:10.008027   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:10.508703   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:11.008209   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:11.508178   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:12.008432   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:12.509550   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:13.008319   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:13.507828   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:14.007561   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:11.531544   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:12.032519   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:12.532198   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:13.032915   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:13.531514   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:14.032723   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:14.531505   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:15.033182   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:15.531615   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:16.032916   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:13.501854   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:14.001080   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:14.503363   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:15.002618   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:15.502840   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:16.000881   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:16.501714   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:17.002610   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:17.502008   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:18.001866   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:14.508549   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:15.007753   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:15.508465   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:16.008319   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:16.508222   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:17.007904   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:17.508163   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:18.008464   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:18.508145   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:19.007728   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:16.531667   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:17.033191   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:17.531547   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:18.032788   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:18.532591   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:19.033086   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:19.531237   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:20.032101   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:20.532279   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:21.032764   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:18.501636   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:19.001241   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:19.501915   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:20.001761   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:20.501797   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:21.001851   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:21.502732   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:22.001114   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:22.502538   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:23.001630   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:19.508503   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:20.009432   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:20.508442   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:21.008564   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:21.508754   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:22.008668   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:22.508947   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:23.007984   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:23.507426   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:24.008776   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:21.531412   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:22.031826   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:22.531169   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:23.032838   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:23.531368   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:24.033085   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:24.531343   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:25.032505   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:25.532373   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:26.032078   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:23.501962   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:24.001801   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:24.502380   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:25.001940   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:25.501661   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:26.001355   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:26.501727   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:27.002704   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:27.502515   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:28.001261   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:24.508926   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:25.008697   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:25.508155   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:26.008403   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:26.509752   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:27.009152   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:27.507692   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:28.008539   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:28.508833   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:29.008403   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:26.532212   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:27.031709   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:27.531512   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:28.032880   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:28.531683   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:29.032225   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:29.532187   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:30.032017   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:30.530954   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:31.031969   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:28.502513   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:29.001736   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:29.502118   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:30.001728   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:30.502479   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:31.002783   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:31.502414   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:32.002781   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:32.501809   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:33.002598   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:29.507936   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:30.007414   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:30.508924   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:31.007756   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:31.509607   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:32.008188   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:32.508901   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:33.009164   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:33.507936   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:34.007349   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:31.532294   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:32.033050   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:32.532115   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:33.031971   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:33.531279   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:34.032256   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:34.531863   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:35.031763   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:35.531164   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:36.031290   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:33.502730   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:34.001984   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:34.502287   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:35.002224   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:35.502985   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:36.000948   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:36.501630   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:37.001169   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:37.502075   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:38.002834   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:34.508225   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:35.007739   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:35.508108   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:36.008481   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:36.508746   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:37.008298   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:37.507944   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:38.008428   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:38.507905   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:39.007728   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:36.531448   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:37.032595   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:37.532096   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:38.031394   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:38.532851   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:39.032534   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:39.532843   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:40.031994   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:40.533667   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:41.033061   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:38.501275   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:39.003274   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:39.502492   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:40.002263   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:40.501814   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:41.002188   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:41.502456   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:42.002449   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:42.503413   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:43.002514   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:39.508385   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:40.008219   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:40.509237   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:41.007998   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:41.507734   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:42.008610   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:42.509142   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:43.008330   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:43.507609   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:44.009119   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:41.531626   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:42.032337   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:42.532298   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:43.032378   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:43.531679   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:44.032529   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:44.532155   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:45.031828   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:45.531299   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:46.031239   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:43.502830   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:44.001989   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:44.502331   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:45.002798   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:45.502197   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:46.001852   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:46.501952   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:47.001753   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:47.501421   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:48.002328   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:44.508315   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:45.008862   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:45.508067   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:46.008030   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:46.507755   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:47.008786   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:47.507672   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:48.009365   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:48.509016   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:49.007277   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:46.531667   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:47.031610   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:47.532096   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:48.032319   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:48.532500   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:49.031773   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:49.531561   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:50.032598   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:50.531974   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:51.031362   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:48.502152   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:49.001130   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:49.502479   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:50.002251   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:50.501762   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:51.000846   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:51.502253   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:52.002765   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:52.502160   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:53.001409   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:49.508190   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:50.008459   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:50.508829   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:51.007664   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:51.509469   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:52.009747   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:52.509579   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:53.009682   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:53.508738   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:54.008970   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:51.532197   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:52.031685   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:52.532322   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:53.031885   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:53.531778   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:54.031643   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:54.531467   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:55.031815   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:55.531155   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:56.031720   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:53.503475   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:54.001639   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:54.501436   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:55.002712   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:55.501897   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:56.001181   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:56.501530   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:57.000985   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:57.501730   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:58.001514   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:54.508263   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:55.007505   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:55.508726   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:56.008230   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:56.508664   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:57.008997   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:57.507428   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:58.008379   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:58.508549   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:59.007995   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:56.531536   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:57.032617   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:57.535990   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:58.031685   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:58.533156   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:59.031587   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:59.532830   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:00.031367   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:00.532930   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:01.031943   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:58.502386   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:59.002215   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:59.503037   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:00.001428   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:00.502319   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:01.002311   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:01.502140   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:02.002283   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:02.502150   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:03.002240   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:59.507946   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:00.008416   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:00.508631   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:01.008561   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:01.508912   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:02.008658   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:02.509386   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:03.008665   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:03.509011   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:04.008072   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:01.533032   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:02.032143   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:02.531588   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:03.032371   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:03.533496   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:04.033023   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:04.531133   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:05.032394   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:05.532243   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:06.031898   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:03.502405   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:04.001877   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:04.505174   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:05.002029   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:05.502125   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:06.001824   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:06.501660   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:07.002257   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:07.502497   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:08.002911   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:04.509042   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:05.008740   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:05.507989   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:06.007873   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:06.508285   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:07.007091   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:07.508238   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:08.008238   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:08.508597   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:09.009516   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:06.531381   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:07.032162   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:07.531649   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:08.032718   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:08.532156   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:09.033496   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:09.533930   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:10.032368   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:10.532625   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:11.032661   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:08.501952   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:09.001604   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:09.501905   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:10.002311   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:10.501777   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:11.001546   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:11.502154   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:12.002455   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:12.503055   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:13.001472   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:09.508050   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:10.008080   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:10.507970   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:11.007844   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:11.508056   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:12.007765   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:12.508456   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:13.007981   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:13.508855   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:14.008604   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:11.532081   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:12.032702   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:12.531078   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:13.031663   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:13.531993   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:14.033077   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:14.531457   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:15.032927   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:15.531699   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:16.031008   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:13.502839   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:14.001682   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:14.501484   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:15.003428   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:15.502649   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:16.002047   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:16.501936   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:17.001951   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:17.502955   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:18.002709   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:14.509628   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:15.008629   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:15.509037   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:16.008098   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:16.508408   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:17.009392   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:17.507832   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:18.008540   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:18.509468   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:19.008988   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:16.532091   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:17.032487   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:17.532767   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:18.032348   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:18.533265   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:19.032832   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:19.533225   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:20.032480   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:20.531859   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:21.031535   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:18.502389   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:19.002611   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:19.501620   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:20.002058   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:20.502778   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:21.002073   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:21.501287   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:22.001492   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:22.503034   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:23.002058   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:19.507218   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:20.008007   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:20.507903   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:21.008002   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:21.508538   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:22.009106   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:22.509031   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:23.008019   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:23.508250   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:24.009604   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:21.532463   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:22.032668   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:22.531757   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:23.031273   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:23.533278   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:24.032950   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:24.531375   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:25.032433   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:25.532764   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:26.031941   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:23.501829   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:24.001397   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:24.502802   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:25.001851   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:25.503206   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:26.001481   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:26.502653   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:27.002180   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:27.501887   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:28.001927   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:24.509024   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:25.007589   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:25.509073   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:26.008555   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:26.508449   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:27.008256   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:27.508501   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:28.009916   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:28.508490   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:29.008336   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:26.531904   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:27.031168   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:27.532025   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:28.032276   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:28.531968   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:29.032764   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:29.531973   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:30.031624   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:30.532201   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:31.032129   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:28.502278   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:29.001507   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:29.501338   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:30.002753   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:30.501620   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:31.001545   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:31.502545   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:32.001650   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:32.501704   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:33.001060   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:29.508006   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:30.007837   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:30.509358   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:31.008238   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:31.508132   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:32.007983   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:32.508981   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:33.007803   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:33.507769   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:34.009970   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:31.532685   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:32.033023   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:32.531348   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:33.031614   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:33.533370   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:34.032795   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:34.531237   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:35.032033   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:35.532778   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:36.031294   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:33.502337   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:34.002204   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:34.501845   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:35.002344   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:35.503195   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:36.002894   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:36.501979   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:37.002008   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:37.501981   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:38.001740   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:34.507806   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:35.009357   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:35.508695   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:36.008959   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:36.509725   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:37.008245   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:37.507606   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:38.008218   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:38.507870   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:39.007087   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:36.532257   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:37.032024   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:37.532220   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:38.031647   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:38.532123   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:39.032889   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:39.532444   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:40.032621   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:40.532943   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:41.031712   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:38.501355   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:39.002083   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:39.501469   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:40.002554   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:40.501408   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:41.002216   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:41.502543   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:42.001754   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:42.501454   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:43.002870   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:39.507033   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:40.007862   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:40.509097   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:41.008460   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:41.509108   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:42.007794   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:42.508514   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:43.009784   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:43.508154   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:44.008565   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:41.531552   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:42.032724   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:42.531968   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:43.031728   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:43.531786   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:44.032471   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:44.531802   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:45.032162   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:45.532320   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:46.031297   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:43.503203   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:44.002682   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:44.502066   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:45.001775   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:45.501403   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:46.002298   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:46.502073   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:47.001483   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:47.501639   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:48.002266   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:44.508296   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:45.008881   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:45.508078   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:46.007871   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:46.508564   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:47.008609   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:47.507625   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:48.008815   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:48.507996   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:49.009033   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:46.531916   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:47.032003   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:47.535669   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:48.032260   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:48.533368   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:49.032732   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:49.532734   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:50.031076   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:50.531706   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:51.031411   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:48.502350   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:49.002202   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:49.502113   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:50.002611   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:50.501323   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:51.002251   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:51.501726   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:52.003470   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:52.502490   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:53.002224   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:49.507379   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:50.007665   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:50.508534   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:51.009007   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:51.509344   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:52.007746   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:52.508532   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:53.009346   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:53.507367   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:54.009828   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:51.532471   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:52.032182   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:52.531696   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:53.031891   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:53.531523   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:54.032527   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:54.531544   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:55.033055   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:55.532251   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:56.032012   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:53.501701   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:54.001815   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:54.501403   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:55.001721   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:55.502408   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:56.006350   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:56.502718   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:57.000975   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:57.502050   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:58.001993   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:54.507665   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:55.010022   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:55.507891   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:56.017962   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:56.509387   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:57.009499   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:57.508592   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:58.007712   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:58.509159   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:59.008024   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:56.532417   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:57.032702   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:57.531960   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:58.032030   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:58.532438   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:59.032562   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:59.532541   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:00.031906   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:00.533707   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:01.031481   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:58.501333   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:59.002706   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:59.501390   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:00.003083   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:00.501477   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:01.003243   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:01.502051   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:02.002119   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:02.502250   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:03.001892   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:59.508467   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:00.007934   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:00.508461   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:01.009263   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:01.508676   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:02.007597   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:02.508263   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:03.008661   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:03.508545   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:04.008653   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:01.533009   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:02.032493   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:02.532027   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:03.032116   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:03.531261   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:04.034181   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:04.531702   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:05.032409   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:05.533808   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:06.031246   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:03.501444   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:04.002084   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:04.501717   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:05.002397   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:05.502329   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:06.002161   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:06.501724   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:07.001096   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:07.501676   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:08.001373   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:04.508793   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:05.009558   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:05.508307   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:06.008745   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:06.508478   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:07.009365   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:07.508285   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:08.008394   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:08.507659   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:09.008883   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:06.531671   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:07.032663   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:07.532654   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:08.032443   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:08.531860   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:09.031786   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:09.532418   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:10.032031   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:10.531026   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:11.031184   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:08.502311   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:09.001761   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:09.501921   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:10.001779   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:10.502884   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:11.000815   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:11.502204   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:12.002552   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:12.502487   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:13.002005   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:09.509248   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:10.008315   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:11.532311   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:13.503116   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	[... roughly 450 near-identical kapi.go:96 entries elided: processes 55595, 55957, and 56230 each re-poll the "app.kubernetes.io/name=kubernetes-dashboard-web" pod about every 500ms from 03:58:10 through 03:59:28, and the pod remains Pending: [<nil>] for the entire interval ...]
	I1219 03:59:26.507648   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:27.009824   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:27.508631   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:28.009013   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:28.507653   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:29.008464   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:26.531694   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:27.032157   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:27.532431   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:28.031890   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:28.533074   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:29.032602   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:29.531803   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:30.032839   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:30.531064   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:31.033390   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:28.502372   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:29.001862   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:29.502294   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:30.001477   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:30.503184   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:31.002218   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:31.502643   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:32.001877   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:32.503311   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:33.002436   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:29.508296   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:30.008304   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:30.508381   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:31.008490   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:31.508829   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:32.007834   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:32.508400   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:33.008794   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:33.509376   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:34.008146   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:31.531920   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:32.033659   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:32.532892   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:33.031391   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:33.532537   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:34.033029   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:34.530956   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:35.031585   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:35.533148   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:36.031532   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:33.502341   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:34.002087   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:34.501994   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:35.001651   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:35.501441   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:36.002140   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:36.501765   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:37.001241   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:37.502479   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:38.002437   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:34.508235   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:35.008483   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:35.508534   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:36.008744   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:36.508702   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:37.008924   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:37.507985   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:38.007421   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:38.507911   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:39.008590   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:36.532045   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:37.031418   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:37.532867   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:38.031333   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:38.532360   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:39.032704   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:39.531535   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:40.033276   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:40.532090   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:41.032674   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:38.503195   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:39.001544   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:39.501650   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:40.001446   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:40.503141   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:41.001293   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:41.501933   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:42.001485   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:42.501393   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:43.001793   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:39.508830   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:40.008286   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:40.508322   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:41.008679   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:41.509263   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:42.008010   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:42.507661   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:43.008954   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:43.508712   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:44.008648   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:41.531115   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:42.033681   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:42.532204   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:43.031525   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:43.532706   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:44.031154   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:44.531400   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:45.032686   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:45.531016   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:46.031694   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:43.500799   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:44.001437   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:44.503087   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:45.001262   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:45.502070   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:46.001597   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:46.501748   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:47.000952   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:47.503068   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:48.002924   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:44.508721   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:45.009360   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:45.507561   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:46.008024   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:46.509438   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:47.008003   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:47.509182   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:48.007694   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:48.509204   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:49.008075   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:46.531475   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:47.032236   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:47.531623   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:48.032627   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:48.531328   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:49.032263   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:49.531544   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:50.031759   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:50.531968   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:51.031169   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:48.502523   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:49.001089   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:49.502166   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:50.002297   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:50.501900   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:51.002177   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:51.501990   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:52.001761   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:52.503411   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:53.001888   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:49.508661   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:50.008645   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:50.509700   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:51.008238   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:51.509485   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:52.008711   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:52.508528   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:53.009157   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:53.508329   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:54.008371   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:51.532470   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:52.033506   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:52.532332   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:53.032618   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:53.532408   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:54.032700   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:54.532680   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:55.030763   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:55.531486   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:56.032694   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:53.501870   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:54.001255   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:54.502146   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:55.001892   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:55.502373   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:56.001923   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:56.502476   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:57.001982   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:57.502446   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:58.003222   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:54.508346   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:55.008513   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:55.509470   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:56.009002   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:56.508067   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:57.007514   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:57.508798   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:58.008828   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:58.508496   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:59.008238   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:56.531146   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:57.031591   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:57.532375   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:58.033082   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:58.531649   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:59.031902   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:59.532588   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:00.032880   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:00.532136   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:01.028606   55595 kapi.go:81] temporary error: getting Pods with label selector "app.kubernetes.io/name=kubernetes-dashboard-web" : [client rate limiter Wait returned an error: context deadline exceeded]
	I1219 04:00:01.028642   55595 kapi.go:107] duration metric: took 6m0.000598506s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	W1219 04:00:01.028754   55595 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [waiting for app.kubernetes.io/name=kubernetes-dashboard-web pods: context deadline exceeded]
	I1219 04:00:01.030295   55595 out.go:179] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I1219 04:00:01.031288   55595 addons.go:546] duration metric: took 6m6.695311639s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I1219 04:00:01.031318   55595 start.go:247] waiting for cluster config update ...
	I1219 04:00:01.031329   55595 start.go:256] writing updated cluster config ...
	I1219 04:00:01.031596   55595 ssh_runner.go:195] Run: rm -f paused
	I1219 04:00:01.039401   55595 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 04:00:01.043907   55595 pod_ready.go:83] waiting for pod "coredns-7d764666f9-s7729" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:01.050711   55595 pod_ready.go:94] pod "coredns-7d764666f9-s7729" is "Ready"
	I1219 04:00:01.050733   55595 pod_ready.go:86] duration metric: took 6.803187ms for pod "coredns-7d764666f9-s7729" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:01.053765   55595 pod_ready.go:83] waiting for pod "etcd-no-preload-298059" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:01.058213   55595 pod_ready.go:94] pod "etcd-no-preload-298059" is "Ready"
	I1219 04:00:01.058234   55595 pod_ready.go:86] duration metric: took 4.447718ms for pod "etcd-no-preload-298059" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:01.060300   55595 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-298059" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:01.065142   55595 pod_ready.go:94] pod "kube-apiserver-no-preload-298059" is "Ready"
	I1219 04:00:01.065166   55595 pod_ready.go:86] duration metric: took 4.840116ms for pod "kube-apiserver-no-preload-298059" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:01.067284   55595 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-298059" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:01.445171   55595 pod_ready.go:94] pod "kube-controller-manager-no-preload-298059" is "Ready"
	I1219 04:00:01.445200   55595 pod_ready.go:86] duration metric: took 377.900542ms for pod "kube-controller-manager-no-preload-298059" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:01.645417   55595 pod_ready.go:83] waiting for pod "kube-proxy-mdfxl" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:02.044330   55595 pod_ready.go:94] pod "kube-proxy-mdfxl" is "Ready"
	I1219 04:00:02.044377   55595 pod_ready.go:86] duration metric: took 398.907218ms for pod "kube-proxy-mdfxl" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:02.245766   55595 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-298059" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:02.645879   55595 pod_ready.go:94] pod "kube-scheduler-no-preload-298059" is "Ready"
	I1219 04:00:02.645937   55595 pod_ready.go:86] duration metric: took 400.143888ms for pod "kube-scheduler-no-preload-298059" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:02.645954   55595 pod_ready.go:40] duration metric: took 1.606522986s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 04:00:02.697158   55595 start.go:625] kubectl: 1.35.0, cluster: 1.35.0-rc.1 (minor skew: 0)
	I1219 04:00:02.698980   55595 out.go:179] * Done! kubectl is now configured to use "no-preload-298059" cluster and "default" namespace by default
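The kapi.go:96 lines above for process 55595 show a half-second polling loop on the label selector "app.kubernetes.io/name=kubernetes-dashboard-web" that ends at kapi.go:81/107 when its 6-minute context deadline expires. Below is a minimal, hedged sketch of such a loop using client-go; the kubeconfig resolution, the "kubernetes-dashboard" namespace, and the overall structure are assumptions for illustration only, not minikube's actual kapi.go implementation.

	// wait_for_pods.go: illustrative polling loop with a 6m deadline (assumed).
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the default kubeconfig (~/.kube/config); assumed for the sketch.
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		// 6-minute deadline, matching the "took 6m0s to wait" duration metric in the log.
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()

		selector := "app.kubernetes.io/name=kubernetes-dashboard-web"
		namespace := "kubernetes-dashboard" // assumed namespace for the dashboard addon
		for {
			pods, err := client.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				if ctx.Err() != nil {
					// Deadline exceeded: analogous to the kapi.go:81 "temporary error ...
					// context deadline exceeded" line followed by the kapi.go:107 summary.
					fmt.Println("gave up waiting:", err)
					return
				}
				fmt.Println("temporary error:", err)
			} else {
				allRunning := len(pods.Items) > 0
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						allRunning = false
						fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					}
				}
				if allRunning {
					fmt.Println("all matching pods running")
					return
				}
			}
			// Roughly the ~500ms cadence visible in the timestamps above.
			time.Sleep(500 * time.Millisecond)
		}
	}

In the failing runs above the dashboard-web pod never leaves Pending, so a loop of this shape can only log "waiting for pod" until the deadline fires and the addon enable reports the "context deadline exceeded" warning.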
	I1219 03:59:58.502543   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:59.001139   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:59.501649   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:00.001415   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:00.502374   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:01.002272   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:01.502072   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:02.002694   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:02.501377   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:03.002499   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:59.508999   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:00.009465   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:00.508462   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:01.008805   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:01.509068   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:02.007682   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:02.508807   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:03.009533   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:03.509171   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:04.008344   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:03.501482   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:04.002080   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:04.502514   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:05.002257   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:05.502741   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:06.001565   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:06.502968   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:07.002364   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:07.502630   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:08.001239   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:04.508168   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:05.007952   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:05.508714   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:06.008805   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:06.508239   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:07.009278   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:07.509811   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:08.008945   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:08.513267   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:09.008127   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:08.502641   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:09.002630   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:09.501272   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:10.001592   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:10.502177   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:11.002030   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:11.501972   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:12.001917   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:12.502061   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:13.001824   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:09.508106   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:10.007937   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:10.507970   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:11.008418   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:11.508614   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:12.007994   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:12.508452   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:13.008632   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:13.510343   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:14.008029   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:13.501559   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:14.000819   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:14.501592   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:15.002062   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:15.501962   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:16.001720   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:16.502083   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:17.002024   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:17.501681   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:18.001502   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:14.507866   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:15.009254   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:15.508704   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:16.008650   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:16.508846   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:17.010798   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:17.507933   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:18.009073   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:18.508337   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:19.008331   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:18.502462   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:19.003975   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:19.501373   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:20.002075   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:20.502437   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:21.001953   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:21.501417   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:22.003083   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:22.501515   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:23.001553   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:19.509712   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:20.008196   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:20.507361   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:21.008284   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:21.508302   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:22.007728   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:22.509259   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:23.008711   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:23.509664   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:24.008507   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:23.502079   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:24.001986   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:24.501922   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:25.001179   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:25.502972   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:26.002165   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:26.502809   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:27.001369   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:27.502083   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:28.002507   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:24.508264   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:25.008006   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:25.509488   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:26.008519   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:26.508978   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:27.008309   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:27.508775   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:28.009625   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:28.508731   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:29.009043   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:28.502787   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:29.001831   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:29.502430   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:29.998860   55957 kapi.go:81] temporary error: getting Pods with label selector "app.kubernetes.io/name=kubernetes-dashboard-web" : [client rate limiter Wait returned an error: context deadline exceeded]
	I1219 04:00:29.998886   55957 kapi.go:107] duration metric: took 6m0.000824832s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	W1219 04:00:29.998960   55957 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [waiting for app.kubernetes.io/name=kubernetes-dashboard-web pods: context deadline exceeded]
	I1219 04:00:30.000498   55957 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1219 04:00:30.001513   55957 addons.go:546] duration metric: took 6m7.141140342s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1219 04:00:30.001540   55957 start.go:247] waiting for cluster config update ...
	I1219 04:00:30.001550   55957 start.go:256] writing updated cluster config ...
	I1219 04:00:30.001800   55957 ssh_runner.go:195] Run: rm -f paused
	I1219 04:00:30.010656   55957 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 04:00:30.015390   55957 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9ptrv" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:30.020029   55957 pod_ready.go:94] pod "coredns-66bc5c9577-9ptrv" is "Ready"
	I1219 04:00:30.020051   55957 pod_ready.go:86] duration metric: took 4.638733ms for pod "coredns-66bc5c9577-9ptrv" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:30.022246   55957 pod_ready.go:83] waiting for pod "etcd-embed-certs-244717" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:30.026208   55957 pod_ready.go:94] pod "etcd-embed-certs-244717" is "Ready"
	I1219 04:00:30.026224   55957 pod_ready.go:86] duration metric: took 3.954396ms for pod "etcd-embed-certs-244717" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:30.028026   55957 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-244717" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:30.033934   55957 pod_ready.go:94] pod "kube-apiserver-embed-certs-244717" is "Ready"
	I1219 04:00:30.033951   55957 pod_ready.go:86] duration metric: took 5.905842ms for pod "kube-apiserver-embed-certs-244717" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:30.036019   55957 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-244717" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:30.417680   55957 pod_ready.go:94] pod "kube-controller-manager-embed-certs-244717" is "Ready"
	I1219 04:00:30.417709   55957 pod_ready.go:86] duration metric: took 381.673199ms for pod "kube-controller-manager-embed-certs-244717" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:30.616122   55957 pod_ready.go:83] waiting for pod "kube-proxy-p8gvm" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:31.015548   55957 pod_ready.go:94] pod "kube-proxy-p8gvm" is "Ready"
	I1219 04:00:31.015585   55957 pod_ready.go:86] duration metric: took 399.442531ms for pod "kube-proxy-p8gvm" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:31.216107   55957 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-244717" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:31.615784   55957 pod_ready.go:94] pod "kube-scheduler-embed-certs-244717" is "Ready"
	I1219 04:00:31.615816   55957 pod_ready.go:86] duration metric: took 399.682179ms for pod "kube-scheduler-embed-certs-244717" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:31.615832   55957 pod_ready.go:40] duration metric: took 1.605153664s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 04:00:31.662639   55957 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 04:00:31.664208   55957 out.go:179] * Done! kubectl is now configured to use "embed-certs-244717" cluster and "default" namespace by default
	I1219 04:00:29.508455   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:30.007925   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:30.507876   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:31.007766   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:31.509691   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:32.008321   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:32.509128   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:33.008511   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:33.509110   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:34.008834   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:34.508661   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:35.009145   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:35.510268   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:36.007810   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:36.508457   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:37.008511   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:37.508340   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:38.008906   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:38.508226   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:39.007515   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:39.508398   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:40.008048   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:40.507411   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:41.008044   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:41.509491   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:42.008720   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:42.508893   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:43.008890   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:43.507746   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:44.008735   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:44.508515   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:45.008316   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:45.508925   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:46.007410   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:46.507809   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:47.007816   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:47.507934   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:48.008317   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:48.511438   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:49.008355   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:49.508479   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:50.008867   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:50.507492   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:51.008220   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:51.508283   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:52.008800   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:52.508617   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:53.008464   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:53.508878   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:54.008198   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:54.509007   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:55.007919   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:55.507118   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:56.008201   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:56.507989   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:57.007872   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:57.508142   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:58.008008   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:58.504601   56230 kapi.go:81] temporary error: getting Pods with label selector "app.kubernetes.io/name=kubernetes-dashboard-web" : [client rate limiter Wait returned an error: context deadline exceeded]
	I1219 04:00:58.504633   56230 kapi.go:107] duration metric: took 6m0.000289249s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	W1219 04:00:58.504722   56230 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [waiting for app.kubernetes.io/name=kubernetes-dashboard-web pods: context deadline exceeded]
	I1219 04:00:58.506261   56230 out.go:179] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I1219 04:00:58.507432   56230 addons.go:546] duration metric: took 6m6.536744168s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I1219 04:00:58.507471   56230 start.go:247] waiting for cluster config update ...
	I1219 04:00:58.507487   56230 start.go:256] writing updated cluster config ...
	I1219 04:00:58.507818   56230 ssh_runner.go:195] Run: rm -f paused
	I1219 04:00:58.516094   56230 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 04:00:58.521203   56230 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dnfcc" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:58.526011   56230 pod_ready.go:94] pod "coredns-66bc5c9577-dnfcc" is "Ready"
	I1219 04:00:58.526035   56230 pod_ready.go:86] duration metric: took 4.809568ms for pod "coredns-66bc5c9577-dnfcc" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:58.528592   56230 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-168174" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:58.534102   56230 pod_ready.go:94] pod "etcd-default-k8s-diff-port-168174" is "Ready"
	I1219 04:00:58.534119   56230 pod_ready.go:86] duration metric: took 5.507213ms for pod "etcd-default-k8s-diff-port-168174" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:58.536078   56230 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-168174" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:58.540931   56230 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-168174" is "Ready"
	I1219 04:00:58.540951   56230 pod_ready.go:86] duration metric: took 4.854792ms for pod "kube-apiserver-default-k8s-diff-port-168174" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:58.542905   56230 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-168174" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:58.920520   56230 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-168174" is "Ready"
	I1219 04:00:58.920546   56230 pod_ready.go:86] duration metric: took 377.623833ms for pod "kube-controller-manager-default-k8s-diff-port-168174" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:59.120738   56230 pod_ready.go:83] waiting for pod "kube-proxy-zs4wg" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:59.520222   56230 pod_ready.go:94] pod "kube-proxy-zs4wg" is "Ready"
	I1219 04:00:59.520254   56230 pod_ready.go:86] duration metric: took 399.487462ms for pod "kube-proxy-zs4wg" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:59.721383   56230 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-168174" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:01:00.120982   56230 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-168174" is "Ready"
	I1219 04:01:00.121009   56230 pod_ready.go:86] duration metric: took 399.598924ms for pod "kube-scheduler-default-k8s-diff-port-168174" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:01:00.121020   56230 pod_ready.go:40] duration metric: took 1.604899766s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 04:01:00.167943   56230 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 04:01:00.169437   56230 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-168174" cluster and "default" namespace by default
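	
	Both clusters above (embed-certs-244717 and default-k8s-diff-port-168174) hit the same 6m0s timeout in kapi.go while polling for pods labelled app.kubernetes.io/name=kubernetes-dashboard-web, which never left Pending. The wait loop can be reproduced outside minikube with a minimal client-go sketch. This is only an illustration, not minikube's code: the kubeconfig path, the kubernetes-dashboard namespace, and the 500ms poll interval are assumptions, while the label selector and the 6-minute budget come straight from the log.
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Assumed kubeconfig location (~/.kube/config); minikube writes the context there by default.
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
	
		// Same 6-minute budget the log shows for the dashboard wait.
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
	
		for {
			// Poll for pods matching the selector from the log; namespace is an assumption.
			pods, err := clientset.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
				LabelSelector: "app.kubernetes.io/name=kubernetes-dashboard-web",
			})
			if err != nil {
				fmt.Println("temporary error listing pods:", err)
			} else if len(pods.Items) > 0 {
				fmt.Printf("found %d pod(s), first phase: %s\n", len(pods.Items), pods.Items[0].Status.Phase)
				return
			}
			select {
			case <-ctx.Done():
				fmt.Println("timed out waiting for dashboard-web pods:", ctx.Err())
				return
			case <-time.After(500 * time.Millisecond):
			}
		}
	}
	
	Running something like this against an affected cluster, or simply "kubectl get pods -n kubernetes-dashboard -l app.kubernetes.io/name=kubernetes-dashboard-web", shows whether the web pod is ever scheduled before the timeout expires, which is the question these failures leave open.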
	
	
	==> CRI-O <==
	Dec 19 04:09:03 no-preload-298059 crio[891]: time="2025-12-19 04:09:03.966866524Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766117343966841325,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134588,},InodesUsed:&UInt64Value{Value:58,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bad289bb-cdda-42d3-b5b6-635984bdd16f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 04:09:03 no-preload-298059 crio[891]: time="2025-12-19 04:09:03.968385443Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3fbf7594-353f-48bd-99c0-eaf07aac247e name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:09:03 no-preload-298059 crio[891]: time="2025-12-19 04:09:03.968845928Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3fbf7594-353f-48bd-99c0-eaf07aac247e name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:09:03 no-preload-298059 crio[891]: time="2025-12-19 04:09:03.969591745Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8734051d2f0753c6b39259d3a92400c83923737a12e641958274288c74f737af,PodSandboxId:38a19878c79e8ec4ee1fc22c21f80a89dfab724c1fd54f7467b39d39cd6d7bdb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766116464727222106,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a1b3f7-48eb-4e67-a38a-d3bbe618037b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43c5e14a321a32354d05717270a3820978b39b7fe7f272d3e3c60c08337ccfd9,PodSandboxId:060f5ba9ce5e95d49b5a8c5b58923f2516cffe5fac626ab8ffbce5972a9f4769,Metadata:&ContainerMetadata{Name:kubernetes-dashboard-auth,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard-auth@sha256:484c65877a3aa9e7095745576e55310f09f135b2fe5d9694289352fe65986052,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd54374d0ab14ab65dd3d5d975f97d5b4aff7b221db479874a4429225b9b22b1,State:CONTAINER_RUNNING,CreatedAt:1766116459128067038,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard-auth,io.kubernetes.pod.name: kubernetes-dashboard-auth-776b489b7d-9c8dt,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 612deea8-222e-4076-8cf2-abed9ad430c4,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d5e3053e,io.kubernetes.container.ports: [{\"name\":\"auth\",\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e932d1edea4abfa1cf9c9b4526d12d79651379c3515e143486d77e49eeed4013,PodSandboxId:b946b5e8ecf26f58b26959e213b4a54f8741433834484a1fa8df6f88ed4f5487,Metadata:&ContainerMetadata{Name:proxy,Attempt:0,},Image:&ImageSpec{Image:3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480,State:CONTAINER_RUNNING,CreatedAt:1766116455666220781,Labels:map[string]string{io.kubernetes.container.name: proxy,io.kubernetes.pod.name: kubernetes-dashboard-kong-78b7499b45-rf7kh,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid:
4e4979a6-a706-4379-ba0f-c676c9c6a4ff,},Annotations:map[string]string{io.kubernetes.container.hash: 6a12d770,io.kubernetes.container.ports: [{\"name\":\"proxy-tls\",\"containerPort\":8443,\"protocol\":\"TCP\"},{\"name\":\"status\",\"containerPort\":8100,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"kong\",\"quit\",\"--wait=15\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87834bd45e2f55618472578765bdaba0091e09366c3d3856f7be8c53adbfa311,PodSandboxId:b946b5e8ecf26f58b26959e213b4a54f8741433834484a1fa8df6f88ed4f5487,Metadata:&ContainerMetadata{Name:clear-stale-pid,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/kong@sha256:4379444ecfd82794b27de38a74ba540e8571683dfdfce74c8ecb4018f308fb29,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a975970da2f5f3b909de
c92b1a5ddc5e9299baee1442fb1a6986a8a120d5480,State:CONTAINER_EXITED,CreatedAt:1766116454729152813,Labels:map[string]string{io.kubernetes.container.name: clear-stale-pid,io.kubernetes.pod.name: kubernetes-dashboard-kong-78b7499b45-rf7kh,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 4e4979a6-a706-4379-ba0f-c676c9c6a4ff,},Annotations:map[string]string{io.kubernetes.container.hash: be552228,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7279a41bb4eb25f161d2cf59ef5fbe88c781b37a1adb7cd8a4fcacf3d987126,PodSandboxId:5de93babad08b2c4439a2a2fe38807e1166eaab072c41cb06446cb0a4da7629e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHan
dler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1766116441044355015,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9d5b4f64-027d-4358-ad35-d6f5cf456210,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e938ed63b36432780e9bda7e4e07175ff2c09116d5c4725046ebe10668ca727f,PodSandboxId:50e1884314e011cbff1646e7c6c28f6a55fbfa203ce35c71a9567cfb9108bae0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9
122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1766116437677163723,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-s7729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c378beb2-94a6-4f48-ba17-5753dd076754,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43d62270f19610e36a2dd04fae3c125abdfc1dc4e1c9bcbbb0a6a0
1f8b266853,PodSandboxId:9fbf14ecbca67f75a0e65fe75557249fcfb6d01bde3883512cc7cddd6b11c15d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_RUNNING,CreatedAt:1766116433784381128,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mdfxl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fed76d85-6daa-4df3-be09-4e1bbd4df590,},Annotations:map[string]string{io.kubernetes.container.hash: d6e0e1a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:128355a3fc0dfc8009b429e4b715eec612771201a2c8acbd9a1d0b4f3c42f4ec,PodSandboxId:38a19878c
79e8ec4ee1fc22c21f80a89dfab724c1fd54f7467b39d39cd6d7bdb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766116433813622034,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a1b3f7-48eb-4e67-a38a-d3bbe618037b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a34dccc447cd0495f72344c9747492811bad3cac08571f5ac5007ab21829ad24,PodSandboxId:e96045d4e198f9ec8d3436
a8c528095444f91681e7a5883ecbaaef3e20e76484,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_RUNNING,CreatedAt:1766116430388219293,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-298059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2499cee63550b4d080ca800fbd48c085,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Con
tainer{Id:b51c72efa2e6b21453d50ea9f3ceff413d2c418492349ac14070ae143117c31d,PodSandboxId:b2319b12b46c44269d5d1ea6bf004c84cd9af7bde9558476debb783c8fa12b2a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_RUNNING,CreatedAt:1766116430334824312,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-298059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cc0917cc026e406690dfc10d9d69272,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74a2e1b518b3668336e07cdc6313b22edced2b5e71bd2bc8ba608cba56cb22b9,PodSandboxId:725ddd7dbbfad94b7f03aa8c2af02f77f8bfc2f5056cccf1aa1eaf4843012f90,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,State:CONTAINER_RUNNING,CreatedAt:1766116430268168535,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-298059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 004aeae4dee57f603ea153e6cf9b1d25,},Annotations:map[string]string{io.kubernetes.container.hash: 3a762ed7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restart
Count: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b745ee728165c7698ece1a961e1c00b51f262eb8ceb5b937040ef09debc9b4b,PodSandboxId:53091c4b851e66b844ff4b3207e698a142b5777cdd2c82fece770c99f9bbec41,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_RUNNING,CreatedAt:1766116430223814162,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-298059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fa215cbe4eb125782b57f5aacf122c1,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.con
tainer.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3fbf7594-353f-48bd-99c0-eaf07aac247e name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:09:04 no-preload-298059 crio[891]: time="2025-12-19 04:09:04.006151194Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7a244c17-c98e-4a6c-bc12-530ff6aef267 name=/runtime.v1.RuntimeService/Version
	Dec 19 04:09:04 no-preload-298059 crio[891]: time="2025-12-19 04:09:04.006250790Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7a244c17-c98e-4a6c-bc12-530ff6aef267 name=/runtime.v1.RuntimeService/Version
	Dec 19 04:09:04 no-preload-298059 crio[891]: time="2025-12-19 04:09:04.007378164Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=68525f68-b526-448e-84f4-3c3325297a56 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 04:09:04 no-preload-298059 crio[891]: time="2025-12-19 04:09:04.007855760Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766117344007832710,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134588,},InodesUsed:&UInt64Value{Value:58,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=68525f68-b526-448e-84f4-3c3325297a56 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 04:09:04 no-preload-298059 crio[891]: time="2025-12-19 04:09:04.008581111Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b24cef4b-f662-4ceb-a620-a78023843df5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:09:04 no-preload-298059 crio[891]: time="2025-12-19 04:09:04.008673677Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b24cef4b-f662-4ceb-a620-a78023843df5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:09:04 no-preload-298059 crio[891]: time="2025-12-19 04:09:04.008987925Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8734051d2f0753c6b39259d3a92400c83923737a12e641958274288c74f737af,PodSandboxId:38a19878c79e8ec4ee1fc22c21f80a89dfab724c1fd54f7467b39d39cd6d7bdb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766116464727222106,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a1b3f7-48eb-4e67-a38a-d3bbe618037b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43c5e14a321a32354d05717270a3820978b39b7fe7f272d3e3c60c08337ccfd9,PodSandboxId:060f5ba9ce5e95d49b5a8c5b58923f2516cffe5fac626ab8ffbce5972a9f4769,Metadata:&ContainerMetadata{Name:kubernetes-dashboard-auth,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard-auth@sha256:484c65877a3aa9e7095745576e55310f09f135b2fe5d9694289352fe65986052,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd54374d0ab14ab65dd3d5d975f97d5b4aff7b221db479874a4429225b9b22b1,State:CONTAINER_RUNNING,CreatedAt:1766116459128067038,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard-auth,io.kubernetes.pod.name: kubernetes-dashboard-auth-776b489b7d-9c8dt,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 612deea8-222e-4076-8cf2-abed9ad430c4,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d5e3053e,io.kubernetes.container.ports: [{\"name\":\"auth\",\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e932d1edea4abfa1cf9c9b4526d12d79651379c3515e143486d77e49eeed4013,PodSandboxId:b946b5e8ecf26f58b26959e213b4a54f8741433834484a1fa8df6f88ed4f5487,Metadata:&ContainerMetadata{Name:proxy,Attempt:0,},Image:&ImageSpec{Image:3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480,State:CONTAINER_RUNNING,CreatedAt:1766116455666220781,Labels:map[string]string{io.kubernetes.container.name: proxy,io.kubernetes.pod.name: kubernetes-dashboard-kong-78b7499b45-rf7kh,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid:
4e4979a6-a706-4379-ba0f-c676c9c6a4ff,},Annotations:map[string]string{io.kubernetes.container.hash: 6a12d770,io.kubernetes.container.ports: [{\"name\":\"proxy-tls\",\"containerPort\":8443,\"protocol\":\"TCP\"},{\"name\":\"status\",\"containerPort\":8100,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"kong\",\"quit\",\"--wait=15\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87834bd45e2f55618472578765bdaba0091e09366c3d3856f7be8c53adbfa311,PodSandboxId:b946b5e8ecf26f58b26959e213b4a54f8741433834484a1fa8df6f88ed4f5487,Metadata:&ContainerMetadata{Name:clear-stale-pid,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/kong@sha256:4379444ecfd82794b27de38a74ba540e8571683dfdfce74c8ecb4018f308fb29,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a975970da2f5f3b909de
c92b1a5ddc5e9299baee1442fb1a6986a8a120d5480,State:CONTAINER_EXITED,CreatedAt:1766116454729152813,Labels:map[string]string{io.kubernetes.container.name: clear-stale-pid,io.kubernetes.pod.name: kubernetes-dashboard-kong-78b7499b45-rf7kh,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 4e4979a6-a706-4379-ba0f-c676c9c6a4ff,},Annotations:map[string]string{io.kubernetes.container.hash: be552228,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7279a41bb4eb25f161d2cf59ef5fbe88c781b37a1adb7cd8a4fcacf3d987126,PodSandboxId:5de93babad08b2c4439a2a2fe38807e1166eaab072c41cb06446cb0a4da7629e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHan
dler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1766116441044355015,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9d5b4f64-027d-4358-ad35-d6f5cf456210,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e938ed63b36432780e9bda7e4e07175ff2c09116d5c4725046ebe10668ca727f,PodSandboxId:50e1884314e011cbff1646e7c6c28f6a55fbfa203ce35c71a9567cfb9108bae0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9
122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1766116437677163723,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-s7729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c378beb2-94a6-4f48-ba17-5753dd076754,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43d62270f19610e36a2dd04fae3c125abdfc1dc4e1c9bcbbb0a6a0
1f8b266853,PodSandboxId:9fbf14ecbca67f75a0e65fe75557249fcfb6d01bde3883512cc7cddd6b11c15d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_RUNNING,CreatedAt:1766116433784381128,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mdfxl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fed76d85-6daa-4df3-be09-4e1bbd4df590,},Annotations:map[string]string{io.kubernetes.container.hash: d6e0e1a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:128355a3fc0dfc8009b429e4b715eec612771201a2c8acbd9a1d0b4f3c42f4ec,PodSandboxId:38a19878c
79e8ec4ee1fc22c21f80a89dfab724c1fd54f7467b39d39cd6d7bdb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766116433813622034,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a1b3f7-48eb-4e67-a38a-d3bbe618037b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a34dccc447cd0495f72344c9747492811bad3cac08571f5ac5007ab21829ad24,PodSandboxId:e96045d4e198f9ec8d3436
a8c528095444f91681e7a5883ecbaaef3e20e76484,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_RUNNING,CreatedAt:1766116430388219293,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-298059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2499cee63550b4d080ca800fbd48c085,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Con
tainer{Id:b51c72efa2e6b21453d50ea9f3ceff413d2c418492349ac14070ae143117c31d,PodSandboxId:b2319b12b46c44269d5d1ea6bf004c84cd9af7bde9558476debb783c8fa12b2a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_RUNNING,CreatedAt:1766116430334824312,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-298059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cc0917cc026e406690dfc10d9d69272,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74a2e1b518b3668336e07cdc6313b22edced2b5e71bd2bc8ba608cba56cb22b9,PodSandboxId:725ddd7dbbfad94b7f03aa8c2af02f77f8bfc2f5056cccf1aa1eaf4843012f90,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,State:CONTAINER_RUNNING,CreatedAt:1766116430268168535,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-298059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 004aeae4dee57f603ea153e6cf9b1d25,},Annotations:map[string]string{io.kubernetes.container.hash: 3a762ed7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restart
Count: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b745ee728165c7698ece1a961e1c00b51f262eb8ceb5b937040ef09debc9b4b,PodSandboxId:53091c4b851e66b844ff4b3207e698a142b5777cdd2c82fece770c99f9bbec41,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_RUNNING,CreatedAt:1766116430223814162,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-298059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fa215cbe4eb125782b57f5aacf122c1,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.con
tainer.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b24cef4b-f662-4ceb-a620-a78023843df5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:09:04 no-preload-298059 crio[891]: time="2025-12-19 04:09:04.040258581Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=17a56057-c50a-4e5f-b372-96418986ed71 name=/runtime.v1.RuntimeService/Version
	Dec 19 04:09:04 no-preload-298059 crio[891]: time="2025-12-19 04:09:04.040744176Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=17a56057-c50a-4e5f-b372-96418986ed71 name=/runtime.v1.RuntimeService/Version
	Dec 19 04:09:04 no-preload-298059 crio[891]: time="2025-12-19 04:09:04.042207131Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=78ae1217-6193-48cb-9697-6eab4f7bf76f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 04:09:04 no-preload-298059 crio[891]: time="2025-12-19 04:09:04.042706753Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766117344042684189,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134588,},InodesUsed:&UInt64Value{Value:58,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=78ae1217-6193-48cb-9697-6eab4f7bf76f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 04:09:04 no-preload-298059 crio[891]: time="2025-12-19 04:09:04.043576575Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0f36e518-e157-467a-9e9b-0214501975fb name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:09:04 no-preload-298059 crio[891]: time="2025-12-19 04:09:04.043685374Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0f36e518-e157-467a-9e9b-0214501975fb name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:09:04 no-preload-298059 crio[891]: time="2025-12-19 04:09:04.044017653Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8734051d2f0753c6b39259d3a92400c83923737a12e641958274288c74f737af,PodSandboxId:38a19878c79e8ec4ee1fc22c21f80a89dfab724c1fd54f7467b39d39cd6d7bdb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766116464727222106,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a1b3f7-48eb-4e67-a38a-d3bbe618037b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43c5e14a321a32354d05717270a3820978b39b7fe7f272d3e3c60c08337ccfd9,PodSandboxId:060f5ba9ce5e95d49b5a8c5b58923f2516cffe5fac626ab8ffbce5972a9f4769,Metadata:&ContainerMetadata{Name:kubernetes-dashboard-auth,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard-auth@sha256:484c65877a3aa9e7095745576e55310f09f135b2fe5d9694289352fe65986052,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd54374d0ab14ab65dd3d5d975f97d5b4aff7b221db479874a4429225b9b22b1,State:CONTAINER_RUNNING,CreatedAt:1766116459128067038,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard-auth,io.kubernetes.pod.name: kubernetes-dashboard-auth-776b489b7d-9c8dt,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 612deea8-222e-4076-8cf2-abed9ad430c4,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d5e3053e,io.kubernetes.container.ports: [{\"name\":\"auth\",\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e932d1edea4abfa1cf9c9b4526d12d79651379c3515e143486d77e49eeed4013,PodSandboxId:b946b5e8ecf26f58b26959e213b4a54f8741433834484a1fa8df6f88ed4f5487,Metadata:&ContainerMetadata{Name:proxy,Attempt:0,},Image:&ImageSpec{Image:3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480,State:CONTAINER_RUNNING,CreatedAt:1766116455666220781,Labels:map[string]string{io.kubernetes.container.name: proxy,io.kubernetes.pod.name: kubernetes-dashboard-kong-78b7499b45-rf7kh,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid:
4e4979a6-a706-4379-ba0f-c676c9c6a4ff,},Annotations:map[string]string{io.kubernetes.container.hash: 6a12d770,io.kubernetes.container.ports: [{\"name\":\"proxy-tls\",\"containerPort\":8443,\"protocol\":\"TCP\"},{\"name\":\"status\",\"containerPort\":8100,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"kong\",\"quit\",\"--wait=15\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87834bd45e2f55618472578765bdaba0091e09366c3d3856f7be8c53adbfa311,PodSandboxId:b946b5e8ecf26f58b26959e213b4a54f8741433834484a1fa8df6f88ed4f5487,Metadata:&ContainerMetadata{Name:clear-stale-pid,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/kong@sha256:4379444ecfd82794b27de38a74ba540e8571683dfdfce74c8ecb4018f308fb29,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a975970da2f5f3b909de
c92b1a5ddc5e9299baee1442fb1a6986a8a120d5480,State:CONTAINER_EXITED,CreatedAt:1766116454729152813,Labels:map[string]string{io.kubernetes.container.name: clear-stale-pid,io.kubernetes.pod.name: kubernetes-dashboard-kong-78b7499b45-rf7kh,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 4e4979a6-a706-4379-ba0f-c676c9c6a4ff,},Annotations:map[string]string{io.kubernetes.container.hash: be552228,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7279a41bb4eb25f161d2cf59ef5fbe88c781b37a1adb7cd8a4fcacf3d987126,PodSandboxId:5de93babad08b2c4439a2a2fe38807e1166eaab072c41cb06446cb0a4da7629e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHan
dler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1766116441044355015,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9d5b4f64-027d-4358-ad35-d6f5cf456210,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e938ed63b36432780e9bda7e4e07175ff2c09116d5c4725046ebe10668ca727f,PodSandboxId:50e1884314e011cbff1646e7c6c28f6a55fbfa203ce35c71a9567cfb9108bae0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9
122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1766116437677163723,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-s7729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c378beb2-94a6-4f48-ba17-5753dd076754,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43d62270f19610e36a2dd04fae3c125abdfc1dc4e1c9bcbbb0a6a0
1f8b266853,PodSandboxId:9fbf14ecbca67f75a0e65fe75557249fcfb6d01bde3883512cc7cddd6b11c15d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_RUNNING,CreatedAt:1766116433784381128,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mdfxl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fed76d85-6daa-4df3-be09-4e1bbd4df590,},Annotations:map[string]string{io.kubernetes.container.hash: d6e0e1a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:128355a3fc0dfc8009b429e4b715eec612771201a2c8acbd9a1d0b4f3c42f4ec,PodSandboxId:38a19878c
79e8ec4ee1fc22c21f80a89dfab724c1fd54f7467b39d39cd6d7bdb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766116433813622034,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a1b3f7-48eb-4e67-a38a-d3bbe618037b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a34dccc447cd0495f72344c9747492811bad3cac08571f5ac5007ab21829ad24,PodSandboxId:e96045d4e198f9ec8d3436
a8c528095444f91681e7a5883ecbaaef3e20e76484,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_RUNNING,CreatedAt:1766116430388219293,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-298059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2499cee63550b4d080ca800fbd48c085,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Con
tainer{Id:b51c72efa2e6b21453d50ea9f3ceff413d2c418492349ac14070ae143117c31d,PodSandboxId:b2319b12b46c44269d5d1ea6bf004c84cd9af7bde9558476debb783c8fa12b2a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_RUNNING,CreatedAt:1766116430334824312,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-298059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cc0917cc026e406690dfc10d9d69272,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74a2e1b518b3668336e07cdc6313b22edced2b5e71bd2bc8ba608cba56cb22b9,PodSandboxId:725ddd7dbbfad94b7f03aa8c2af02f77f8bfc2f5056cccf1aa1eaf4843012f90,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,State:CONTAINER_RUNNING,CreatedAt:1766116430268168535,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-298059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 004aeae4dee57f603ea153e6cf9b1d25,},Annotations:map[string]string{io.kubernetes.container.hash: 3a762ed7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restart
Count: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b745ee728165c7698ece1a961e1c00b51f262eb8ceb5b937040ef09debc9b4b,PodSandboxId:53091c4b851e66b844ff4b3207e698a142b5777cdd2c82fece770c99f9bbec41,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_RUNNING,CreatedAt:1766116430223814162,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-298059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fa215cbe4eb125782b57f5aacf122c1,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.con
tainer.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0f36e518-e157-467a-9e9b-0214501975fb name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:09:04 no-preload-298059 crio[891]: time="2025-12-19 04:09:04.079736106Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=95c6da1e-a6e7-471f-a791-5d4f737d5a02 name=/runtime.v1.RuntimeService/Version
	Dec 19 04:09:04 no-preload-298059 crio[891]: time="2025-12-19 04:09:04.080960243Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=95c6da1e-a6e7-471f-a791-5d4f737d5a02 name=/runtime.v1.RuntimeService/Version
	Dec 19 04:09:04 no-preload-298059 crio[891]: time="2025-12-19 04:09:04.082831784Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3811715c-47a2-4391-b937-44d95856c2fc name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 04:09:04 no-preload-298059 crio[891]: time="2025-12-19 04:09:04.083707911Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766117344083682739,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134588,},InodesUsed:&UInt64Value{Value:58,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3811715c-47a2-4391-b937-44d95856c2fc name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 04:09:04 no-preload-298059 crio[891]: time="2025-12-19 04:09:04.084939224Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=60469fcf-452d-417a-a5d7-05ff9104e617 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:09:04 no-preload-298059 crio[891]: time="2025-12-19 04:09:04.084994604Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=60469fcf-452d-417a-a5d7-05ff9104e617 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:09:04 no-preload-298059 crio[891]: time="2025-12-19 04:09:04.085263026Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8734051d2f0753c6b39259d3a92400c83923737a12e641958274288c74f737af,PodSandboxId:38a19878c79e8ec4ee1fc22c21f80a89dfab724c1fd54f7467b39d39cd6d7bdb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766116464727222106,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a1b3f7-48eb-4e67-a38a-d3bbe618037b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43c5e14a321a32354d05717270a3820978b39b7fe7f272d3e3c60c08337ccfd9,PodSandboxId:060f5ba9ce5e95d49b5a8c5b58923f2516cffe5fac626ab8ffbce5972a9f4769,Metadata:&ContainerMetadata{Name:kubernetes-dashboard-auth,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard-auth@sha256:484c65877a3aa9e7095745576e55310f09f135b2fe5d9694289352fe65986052,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd54374d0ab14ab65dd3d5d975f97d5b4aff7b221db479874a4429225b9b22b1,State:CONTAINER_RUNNING,CreatedAt:1766116459128067038,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard-auth,io.kubernetes.pod.name: kubernetes-dashboard-auth-776b489b7d-9c8dt,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 612deea8-222e-4076-8cf2-abed9ad430c4,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d5e3053e,io.kubernetes.container.ports: [{\"name\":\"auth\",\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e932d1edea4abfa1cf9c9b4526d12d79651379c3515e143486d77e49eeed4013,PodSandboxId:b946b5e8ecf26f58b26959e213b4a54f8741433834484a1fa8df6f88ed4f5487,Metadata:&ContainerMetadata{Name:proxy,Attempt:0,},Image:&ImageSpec{Image:3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480,State:CONTAINER_RUNNING,CreatedAt:1766116455666220781,Labels:map[string]string{io.kubernetes.container.name: proxy,io.kubernetes.pod.name: kubernetes-dashboard-kong-78b7499b45-rf7kh,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid:
4e4979a6-a706-4379-ba0f-c676c9c6a4ff,},Annotations:map[string]string{io.kubernetes.container.hash: 6a12d770,io.kubernetes.container.ports: [{\"name\":\"proxy-tls\",\"containerPort\":8443,\"protocol\":\"TCP\"},{\"name\":\"status\",\"containerPort\":8100,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"kong\",\"quit\",\"--wait=15\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87834bd45e2f55618472578765bdaba0091e09366c3d3856f7be8c53adbfa311,PodSandboxId:b946b5e8ecf26f58b26959e213b4a54f8741433834484a1fa8df6f88ed4f5487,Metadata:&ContainerMetadata{Name:clear-stale-pid,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/kong@sha256:4379444ecfd82794b27de38a74ba540e8571683dfdfce74c8ecb4018f308fb29,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a975970da2f5f3b909de
c92b1a5ddc5e9299baee1442fb1a6986a8a120d5480,State:CONTAINER_EXITED,CreatedAt:1766116454729152813,Labels:map[string]string{io.kubernetes.container.name: clear-stale-pid,io.kubernetes.pod.name: kubernetes-dashboard-kong-78b7499b45-rf7kh,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 4e4979a6-a706-4379-ba0f-c676c9c6a4ff,},Annotations:map[string]string{io.kubernetes.container.hash: be552228,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7279a41bb4eb25f161d2cf59ef5fbe88c781b37a1adb7cd8a4fcacf3d987126,PodSandboxId:5de93babad08b2c4439a2a2fe38807e1166eaab072c41cb06446cb0a4da7629e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHan
dler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1766116441044355015,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9d5b4f64-027d-4358-ad35-d6f5cf456210,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e938ed63b36432780e9bda7e4e07175ff2c09116d5c4725046ebe10668ca727f,PodSandboxId:50e1884314e011cbff1646e7c6c28f6a55fbfa203ce35c71a9567cfb9108bae0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9
122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1766116437677163723,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-s7729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c378beb2-94a6-4f48-ba17-5753dd076754,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43d62270f19610e36a2dd04fae3c125abdfc1dc4e1c9bcbbb0a6a0
1f8b266853,PodSandboxId:9fbf14ecbca67f75a0e65fe75557249fcfb6d01bde3883512cc7cddd6b11c15d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_RUNNING,CreatedAt:1766116433784381128,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mdfxl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fed76d85-6daa-4df3-be09-4e1bbd4df590,},Annotations:map[string]string{io.kubernetes.container.hash: d6e0e1a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:128355a3fc0dfc8009b429e4b715eec612771201a2c8acbd9a1d0b4f3c42f4ec,PodSandboxId:38a19878c
79e8ec4ee1fc22c21f80a89dfab724c1fd54f7467b39d39cd6d7bdb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766116433813622034,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a1b3f7-48eb-4e67-a38a-d3bbe618037b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a34dccc447cd0495f72344c9747492811bad3cac08571f5ac5007ab21829ad24,PodSandboxId:e96045d4e198f9ec8d3436
a8c528095444f91681e7a5883ecbaaef3e20e76484,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_RUNNING,CreatedAt:1766116430388219293,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-298059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2499cee63550b4d080ca800fbd48c085,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Con
tainer{Id:b51c72efa2e6b21453d50ea9f3ceff413d2c418492349ac14070ae143117c31d,PodSandboxId:b2319b12b46c44269d5d1ea6bf004c84cd9af7bde9558476debb783c8fa12b2a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_RUNNING,CreatedAt:1766116430334824312,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-298059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cc0917cc026e406690dfc10d9d69272,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74a2e1b518b3668336e07cdc6313b22edced2b5e71bd2bc8ba608cba56cb22b9,PodSandboxId:725ddd7dbbfad94b7f03aa8c2af02f77f8bfc2f5056cccf1aa1eaf4843012f90,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,State:CONTAINER_RUNNING,CreatedAt:1766116430268168535,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-298059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 004aeae4dee57f603ea153e6cf9b1d25,},Annotations:map[string]string{io.kubernetes.container.hash: 3a762ed7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restart
Count: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b745ee728165c7698ece1a961e1c00b51f262eb8ceb5b937040ef09debc9b4b,PodSandboxId:53091c4b851e66b844ff4b3207e698a142b5777cdd2c82fece770c99f9bbec41,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_RUNNING,CreatedAt:1766116430223814162,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-298059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fa215cbe4eb125782b57f5aacf122c1,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.con
tainer.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=60469fcf-452d-417a-a5d7-05ff9104e617 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                           CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	8734051d2f075       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                14 minutes ago      Running             storage-provisioner         3                   38a19878c79e8       storage-provisioner                          kube-system
	43c5e14a321a3       docker.io/kubernetesui/dashboard-auth@sha256:484c65877a3aa9e7095745576e55310f09f135b2fe5d9694289352fe65986052   14 minutes ago      Running             kubernetes-dashboard-auth   0                   060f5ba9ce5e9       kubernetes-dashboard-auth-776b489b7d-9c8dt   kubernetes-dashboard
	e932d1edea4ab       3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480                                                14 minutes ago      Running             proxy                       0                   b946b5e8ecf26       kubernetes-dashboard-kong-78b7499b45-rf7kh   kubernetes-dashboard
	87834bd45e2f5       docker.io/library/kong@sha256:4379444ecfd82794b27de38a74ba540e8571683dfdfce74c8ecb4018f308fb29                  14 minutes ago      Exited              clear-stale-pid             0                   b946b5e8ecf26       kubernetes-dashboard-kong-78b7499b45-rf7kh   kubernetes-dashboard
	f7279a41bb4eb       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e             15 minutes ago      Running             busybox                     1                   5de93babad08b       busybox                                      default
	e938ed63b3643       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                                15 minutes ago      Running             coredns                     1                   50e1884314e01       coredns-7d764666f9-s7729                     kube-system
	128355a3fc0df       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                15 minutes ago      Exited              storage-provisioner         2                   38a19878c79e8       storage-provisioner                          kube-system
	43d62270f1961       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a                                                15 minutes ago      Running             kube-proxy                  1                   9fbf14ecbca67       kube-proxy-mdfxl                             kube-system
	a34dccc447cd0       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc                                                15 minutes ago      Running             kube-scheduler              1                   e96045d4e198f       kube-scheduler-no-preload-298059             kube-system
	b51c72efa2e6b       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                                15 minutes ago      Running             etcd                        1                   b2319b12b46c4       etcd-no-preload-298059                       kube-system
	74a2e1b518b36       58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce                                                15 minutes ago      Running             kube-apiserver              1                   725ddd7dbbfad       kube-apiserver-no-preload-298059             kube-system
	8b745ee728165       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614                                                15 minutes ago      Running             kube-controller-manager     1                   53091c4b851e6       kube-controller-manager-no-preload-298059    kube-system
	
	
	==> coredns [e938ed63b36432780e9bda7e4e07175ff2c09116d5c4725046ebe10668ca727f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:47550 - 54271 "HINFO IN 3726524411623469454.6949907490803346276. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.026035699s
	
	
	==> describe nodes <==
	Name:               no-preload-298059
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-298059
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=no-preload-298059
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T03_51_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 03:50:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-298059
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 04:09:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 04:08:10 +0000   Fri, 19 Dec 2025 03:50:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 04:08:10 +0000   Fri, 19 Dec 2025 03:50:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 04:08:10 +0000   Fri, 19 Dec 2025 03:50:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 04:08:10 +0000   Fri, 19 Dec 2025 03:54:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.137
	  Hostname:    no-preload-298059
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 2d818f99fc714bf3ba2eba438495ffd9
	  System UUID:                2d818f99-fc71-4bf3-ba2e-ba438495ffd9
	  Boot ID:                    25d4d3fe-f38b-40d9-8b85-a42971ad642c
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 coredns-7d764666f9-s7729                                 100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     17m
	  kube-system                 etcd-no-preload-298059                                   100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         18m
	  kube-system                 kube-apiserver-no-preload-298059                         250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-no-preload-298059                200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-mdfxl                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-no-preload-298059                         100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 metrics-server-5d785b57d4-fkthx                          100m (5%)     0 (0%)      200Mi (6%)       0 (0%)         17m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kubernetes-dashboard        kubernetes-dashboard-api-7646d845d9-scngx                100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    15m
	  kubernetes-dashboard        kubernetes-dashboard-auth-776b489b7d-9c8dt               100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    15m
	  kubernetes-dashboard        kubernetes-dashboard-kong-78b7499b45-rf7kh               0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kubernetes-dashboard        kubernetes-dashboard-metrics-scraper-594bbfb84b-gkplq    100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    15m
	  kubernetes-dashboard        kubernetes-dashboard-web-7f7574785f-pnj4g                100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests      Limits
	  --------           --------      ------
	  cpu                1250m (62%)   1 (50%)
	  memory             1170Mi (39%)  1770Mi (59%)
	  ephemeral-storage  0 (0%)        0 (0%)
	  hugepages-2Mi      0 (0%)        0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  18m   node-controller  Node no-preload-298059 event: Registered Node no-preload-298059 in Controller
	  Normal  RegisteredNode  15m   node-controller  Node no-preload-298059 event: Registered Node no-preload-298059 in Controller
	
	
	==> dmesg <==
	[Dec19 03:53] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001397] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.005601] (rpcbind)[120]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.795242] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.107735] kauditd_printk_skb: 88 callbacks suppressed
	[  +4.663916] kauditd_printk_skb: 196 callbacks suppressed
	[Dec19 03:54] kauditd_printk_skb: 275 callbacks suppressed
	[ +11.265172] kauditd_printk_skb: 204 callbacks suppressed
	[  +9.990359] kauditd_printk_skb: 47 callbacks suppressed
	[Dec19 03:55] kauditd_printk_skb: 11 callbacks suppressed
	
	
	==> etcd [b51c72efa2e6b21453d50ea9f3ceff413d2c418492349ac14070ae143117c31d] <==
	{"level":"warn","ts":"2025-12-19T03:54:14.476175Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"757.170235ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T03:54:14.476197Z","caller":"traceutil/trace.go:172","msg":"trace[176152680] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:790; }","duration":"757.197764ms","start":"2025-12-19T03:54:13.718992Z","end":"2025-12-19T03:54:14.476190Z","steps":["trace[176152680] 'agreement among raft nodes before linearized reading'  (duration: 742.89228ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:54:14.476344Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"459.995416ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T03:54:14.476374Z","caller":"traceutil/trace.go:172","msg":"trace[739941376] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:791; }","duration":"460.026293ms","start":"2025-12-19T03:54:14.016340Z","end":"2025-12-19T03:54:14.476366Z","steps":["trace[739941376] 'agreement among raft nodes before linearized reading'  (duration: 459.957782ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:54:14.476396Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-19T03:54:14.016325Z","time spent":"460.064299ms","remote":"127.0.0.1:53478","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2025-12-19T03:54:14.476489Z","caller":"traceutil/trace.go:172","msg":"trace[116140840] transaction","detail":"{read_only:false; response_revision:791; number_of_response:1; }","duration":"1.428734442s","start":"2025-12-19T03:54:13.047747Z","end":"2025-12-19T03:54:14.476481Z","steps":["trace[116140840] 'process raft request'  (duration: 1.414113802s)","trace[116140840] 'compare'  (duration: 14.323645ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-19T03:54:14.476541Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-19T03:54:13.047725Z","time spent":"1.428779193s","remote":"127.0.0.1:53612","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":556,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/no-preload-298059\" mod_revision:740 > success:<request_put:<key:\"/registry/leases/kube-node-lease/no-preload-298059\" value_size:498 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/no-preload-298059\" > >"}
	{"level":"warn","ts":"2025-12-19T03:54:16.134151Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"117.815426ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T03:54:16.134371Z","caller":"traceutil/trace.go:172","msg":"trace[959220433] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:801; }","duration":"118.042887ms","start":"2025-12-19T03:54:16.016308Z","end":"2025-12-19T03:54:16.134351Z","steps":["trace[959220433] 'range keys from in-memory index tree'  (duration: 117.669225ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:54:16.882132Z","caller":"traceutil/trace.go:172","msg":"trace[726864670] linearizableReadLoop","detail":"{readStateIndex:854; appliedIndex:854; }","duration":"164.772241ms","start":"2025-12-19T03:54:16.717343Z","end":"2025-12-19T03:54:16.882115Z","steps":["trace[726864670] 'read index received'  (duration: 164.764805ms)","trace[726864670] 'applied index is now lower than readState.Index'  (duration: 6.593µs)"],"step_count":2}
	{"level":"info","ts":"2025-12-19T03:54:16.882278Z","caller":"traceutil/trace.go:172","msg":"trace[662213994] transaction","detail":"{read_only:false; response_revision:802; number_of_response:1; }","duration":"212.405398ms","start":"2025-12-19T03:54:16.669861Z","end":"2025-12-19T03:54:16.882266Z","steps":["trace[662213994] 'process raft request'  (duration: 212.291616ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:54:16.882622Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"165.283903ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T03:54:16.883576Z","caller":"traceutil/trace.go:172","msg":"trace[1583234260] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:802; }","duration":"166.246921ms","start":"2025-12-19T03:54:16.717315Z","end":"2025-12-19T03:54:16.883562Z","steps":["trace[1583234260] 'agreement among raft nodes before linearized reading'  (duration: 165.210679ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:54:17.342523Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"130.319454ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T03:54:17.342655Z","caller":"traceutil/trace.go:172","msg":"trace[1531808117] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:802; }","duration":"130.458084ms","start":"2025-12-19T03:54:17.212181Z","end":"2025-12-19T03:54:17.342639Z","steps":["trace[1531808117] 'range keys from in-memory index tree'  (duration: 130.261043ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:54:17.343397Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"325.240632ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T03:54:17.343463Z","caller":"traceutil/trace.go:172","msg":"trace[1475071058] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:802; }","duration":"325.309685ms","start":"2025-12-19T03:54:17.018143Z","end":"2025-12-19T03:54:17.343452Z","steps":["trace[1475071058] 'range keys from in-memory index tree'  (duration: 325.153137ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:54:17.343512Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-19T03:54:17.018124Z","time spent":"325.377588ms","remote":"127.0.0.1:53478","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2025-12-19T03:54:45.761692Z","caller":"traceutil/trace.go:172","msg":"trace[1645681447] transaction","detail":"{read_only:false; response_revision:833; number_of_response:1; }","duration":"121.358147ms","start":"2025-12-19T03:54:45.640315Z","end":"2025-12-19T03:54:45.761673Z","steps":["trace[1645681447] 'process raft request'  (duration: 121.257897ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T04:03:51.194318Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1099}
	{"level":"info","ts":"2025-12-19T04:03:51.259191Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1099,"took":"64.43722ms","hash":1653275827,"current-db-size-bytes":4591616,"current-db-size":"4.6 MB","current-db-size-in-use-bytes":1994752,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2025-12-19T04:03:51.259284Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1653275827,"revision":1099,"compact-revision":-1}
	{"level":"info","ts":"2025-12-19T04:08:51.202246Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1489}
	{"level":"info","ts":"2025-12-19T04:08:51.208421Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1489,"took":"5.286414ms","hash":1861443370,"current-db-size-bytes":4591616,"current-db-size":"4.6 MB","current-db-size-in-use-bytes":2441216,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2025-12-19T04:08:51.208468Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1861443370,"revision":1489,"compact-revision":1099}
	
	
	==> kernel <==
	 04:09:04 up 15 min,  0 users,  load average: 0.26, 0.38, 0.27
	Linux no-preload-298059 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Dec 17 12:49:57 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [74a2e1b518b3668336e07cdc6313b22edced2b5e71bd2bc8ba608cba56cb22b9] <==
	E1219 04:04:53.571088       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1219 04:04:53.571100       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 04:06:53.571149       1 handler_proxy.go:99] no RequestInfo found in the context
	W1219 04:06:53.571202       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 04:06:53.571234       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1219 04:06:53.571251       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1219 04:06:53.571241       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1219 04:06:53.572419       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 04:08:52.576463       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 04:08:52.576557       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1219 04:08:53.577093       1 handler_proxy.go:99] no RequestInfo found in the context
	W1219 04:08:53.577162       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 04:08:53.577513       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1219 04:08:53.577525       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1219 04:08:53.577848       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1219 04:08:53.579059       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [8b745ee728165c7698ece1a961e1c00b51f262eb8ceb5b937040ef09debc9b4b] <==
	I1219 04:02:57.546004       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:03:27.259504       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:03:27.555857       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:03:57.264706       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:03:57.568359       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:04:27.270094       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:04:27.576385       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:04:57.275710       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:04:57.585896       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:05:27.280855       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:05:27.597231       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:05:57.286037       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:05:57.605655       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:06:27.291321       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:06:27.614434       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:06:57.299201       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:06:57.624436       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:07:27.309086       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:07:27.635234       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:07:57.313999       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:07:57.646271       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:08:27.319066       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:08:27.656637       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:08:57.325326       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:08:57.669656       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [43d62270f19610e36a2dd04fae3c125abdfc1dc4e1c9bcbbb0a6a01f8b266853] <==
	I1219 03:53:54.182308       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 03:53:54.283281       1 shared_informer.go:377] "Caches are synced"
	I1219 03:53:54.283329       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.72.137"]
	E1219 03:53:54.283428       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 03:53:54.374256       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1219 03:53:54.374327       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1219 03:53:54.374350       1 server_linux.go:136] "Using iptables Proxier"
	I1219 03:53:54.393486       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 03:53:54.394107       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1219 03:53:54.394135       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:53:54.406028       1 config.go:200] "Starting service config controller"
	I1219 03:53:54.406555       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 03:53:54.406726       1 config.go:106] "Starting endpoint slice config controller"
	I1219 03:53:54.407288       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 03:53:54.407323       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 03:53:54.407330       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 03:53:54.408429       1 config.go:309] "Starting node config controller"
	I1219 03:53:54.408441       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 03:53:54.408450       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 03:53:54.511565       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1219 03:53:54.513892       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1219 03:53:54.511881       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [a34dccc447cd0495f72344c9747492811bad3cac08571f5ac5007ab21829ad24] <==
	I1219 03:53:50.978728       1 serving.go:386] Generated self-signed cert in-memory
	W1219 03:53:52.478810       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1219 03:53:52.479833       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1219 03:53:52.479852       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1219 03:53:52.479858       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1219 03:53:52.566597       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-rc.1"
	I1219 03:53:52.566650       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:53:52.591525       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1219 03:53:52.592098       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:53:52.594133       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 03:53:52.592118       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1219 03:53:52.627223       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	I1219 03:53:54.194990       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 19 04:08:35 no-preload-298059 kubelet[1788]: E1219 04:08:35.296793    1788 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-gkplq" containerName="kubernetes-dashboard-metrics-scraper"
	Dec 19 04:08:35 no-preload-298059 kubelet[1788]: E1219 04:08:35.298978    1788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-web\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:5924e55a2eb65774b68e35fd0d0b41a3ef5dc6ef3e02dce6e340cf59b6d67d30 in docker.io/kubernetesui/dashboard-web: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-web-7f7574785f-pnj4g" podUID="903dae27-c404-4849-a890-b0b9347710fa"
	Dec 19 04:08:35 no-preload-298059 kubelet[1788]: E1219 04:08:35.299845    1788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\": ErrImagePull: reading manifest 1.2.2 in docker.io/kubernetesui/dashboard-metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-gkplq" podUID="af2a0a73-cbd2-4724-8d28-578fb9abddbe"
	Dec 19 04:08:39 no-preload-298059 kubelet[1788]: E1219 04:08:39.534220    1788 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766117319533905761  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:134588}  inodes_used:{value:58}}"
	Dec 19 04:08:39 no-preload-298059 kubelet[1788]: E1219 04:08:39.534240    1788 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766117319533905761  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:134588}  inodes_used:{value:58}}"
	Dec 19 04:08:42 no-preload-298059 kubelet[1788]: E1219 04:08:42.298517    1788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-api\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": ErrImagePull: reading manifest 1.14.0 in docker.io/kubernetesui/dashboard-api: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-api-7646d845d9-scngx" podUID="4f806eec-0e2a-4b2c-8ab4-df0bc3208141"
	Dec 19 04:08:43 no-preload-298059 kubelet[1788]: E1219 04:08:43.296413    1788 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/metrics-server-5d785b57d4-fkthx" containerName="metrics-server"
	Dec 19 04:08:43 no-preload-298059 kubelet[1788]: E1219 04:08:43.298257    1788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-5d785b57d4-fkthx" podUID="cd519bcc-8634-4a06-8174-bc1d8114f895"
	Dec 19 04:08:44 no-preload-298059 kubelet[1788]: E1219 04:08:44.295610    1788 prober_manager.go:209] "Readiness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-kong-78b7499b45-rf7kh" containerName="proxy"
	Dec 19 04:08:47 no-preload-298059 kubelet[1788]: E1219 04:08:47.299320    1788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-web\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:5924e55a2eb65774b68e35fd0d0b41a3ef5dc6ef3e02dce6e340cf59b6d67d30 in docker.io/kubernetesui/dashboard-web: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-web-7f7574785f-pnj4g" podUID="903dae27-c404-4849-a890-b0b9347710fa"
	Dec 19 04:08:49 no-preload-298059 kubelet[1788]: E1219 04:08:49.296904    1788 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-gkplq" containerName="kubernetes-dashboard-metrics-scraper"
	Dec 19 04:08:49 no-preload-298059 kubelet[1788]: E1219 04:08:49.298338    1788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\": ErrImagePull: reading manifest 1.2.2 in docker.io/kubernetesui/dashboard-metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-gkplq" podUID="af2a0a73-cbd2-4724-8d28-578fb9abddbe"
	Dec 19 04:08:49 no-preload-298059 kubelet[1788]: E1219 04:08:49.537127    1788 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766117329536461610  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:134588}  inodes_used:{value:58}}"
	Dec 19 04:08:49 no-preload-298059 kubelet[1788]: E1219 04:08:49.537166    1788 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766117329536461610  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:134588}  inodes_used:{value:58}}"
	Dec 19 04:08:57 no-preload-298059 kubelet[1788]: E1219 04:08:57.299154    1788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-api\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": ErrImagePull: reading manifest 1.14.0 in docker.io/kubernetesui/dashboard-api: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-api-7646d845d9-scngx" podUID="4f806eec-0e2a-4b2c-8ab4-df0bc3208141"
	Dec 19 04:08:58 no-preload-298059 kubelet[1788]: E1219 04:08:58.296243    1788 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/metrics-server-5d785b57d4-fkthx" containerName="metrics-server"
	Dec 19 04:08:58 no-preload-298059 kubelet[1788]: E1219 04:08:58.304113    1788 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 19 04:08:58 no-preload-298059 kubelet[1788]: E1219 04:08:58.304162    1788 kuberuntime_image.go:43] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 19 04:08:58 no-preload-298059 kubelet[1788]: E1219 04:08:58.304481    1788 kuberuntime_manager.go:1664] "Unhandled Error" err="container metrics-server start failed in pod metrics-server-5d785b57d4-fkthx_kube-system(cd519bcc-8634-4a06-8174-bc1d8114f895): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Dec 19 04:08:58 no-preload-298059 kubelet[1788]: E1219 04:08:58.304512    1788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-5d785b57d4-fkthx" podUID="cd519bcc-8634-4a06-8174-bc1d8114f895"
	Dec 19 04:08:59 no-preload-298059 kubelet[1788]: E1219 04:08:59.539856    1788 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766117339538551655  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:134588}  inodes_used:{value:58}}"
	Dec 19 04:08:59 no-preload-298059 kubelet[1788]: E1219 04:08:59.539882    1788 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766117339538551655  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:134588}  inodes_used:{value:58}}"
	Dec 19 04:09:01 no-preload-298059 kubelet[1788]: E1219 04:09:01.301702    1788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-web\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:5924e55a2eb65774b68e35fd0d0b41a3ef5dc6ef3e02dce6e340cf59b6d67d30 in docker.io/kubernetesui/dashboard-web: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-web-7f7574785f-pnj4g" podUID="903dae27-c404-4849-a890-b0b9347710fa"
	Dec 19 04:09:04 no-preload-298059 kubelet[1788]: E1219 04:09:04.296471    1788 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-gkplq" containerName="kubernetes-dashboard-metrics-scraper"
	Dec 19 04:09:04 no-preload-298059 kubelet[1788]: E1219 04:09:04.302655    1788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\": ErrImagePull: reading manifest 1.2.2 in docker.io/kubernetesui/dashboard-metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-gkplq" podUID="af2a0a73-cbd2-4724-8d28-578fb9abddbe"
	
	
	==> kubernetes-dashboard [43c5e14a321a32354d05717270a3820978b39b7fe7f272d3e3c60c08337ccfd9] <==
	I1219 03:54:19.436094       1 main.go:34] "Starting Kubernetes Dashboard Auth" version="1.4.0"
	I1219 03:54:19.436315       1 init.go:49] Using in-cluster config
	I1219 03:54:19.436572       1 main.go:44] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> storage-provisioner [128355a3fc0dfc8009b429e4b715eec612771201a2c8acbd9a1d0b4f3c42f4ec] <==
	I1219 03:53:54.031073       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1219 03:54:24.037738       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [8734051d2f0753c6b39259d3a92400c83923737a12e641958274288c74f737af] <==
	W1219 04:08:38.636143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:08:40.640828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:08:40.645603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:08:42.649103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:08:42.653899       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:08:44.657537       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:08:44.662158       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:08:46.666307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:08:46.675101       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:08:48.678497       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:08:48.684619       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:08:50.689095       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:08:50.693815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:08:52.696199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:08:52.701501       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:08:54.704648       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:08:54.712814       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:08:56.716244       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:08:56.722185       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:08:58.725614       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:08:58.731276       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:09:00.735535       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:09:00.741094       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:09:02.744357       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:09:02.750587       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-298059 -n no-preload-298059
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-298059 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: metrics-server-5d785b57d4-fkthx kubernetes-dashboard-api-7646d845d9-scngx kubernetes-dashboard-metrics-scraper-594bbfb84b-gkplq kubernetes-dashboard-web-7f7574785f-pnj4g
helpers_test.go:283: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context no-preload-298059 describe pod metrics-server-5d785b57d4-fkthx kubernetes-dashboard-api-7646d845d9-scngx kubernetes-dashboard-metrics-scraper-594bbfb84b-gkplq kubernetes-dashboard-web-7f7574785f-pnj4g
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context no-preload-298059 describe pod metrics-server-5d785b57d4-fkthx kubernetes-dashboard-api-7646d845d9-scngx kubernetes-dashboard-metrics-scraper-594bbfb84b-gkplq kubernetes-dashboard-web-7f7574785f-pnj4g: exit status 1 (64.104647ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-5d785b57d4-fkthx" not found
	Error from server (NotFound): pods "kubernetes-dashboard-api-7646d845d9-scngx" not found
	Error from server (NotFound): pods "kubernetes-dashboard-metrics-scraper-594bbfb84b-gkplq" not found
	Error from server (NotFound): pods "kubernetes-dashboard-web-7f7574785f-pnj4g" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context no-preload-298059 describe pod metrics-server-5d785b57d4-fkthx kubernetes-dashboard-api-7646d845d9-scngx kubernetes-dashboard-metrics-scraper-594bbfb84b-gkplq kubernetes-dashboard-web-7f7574785f-pnj4g: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.41s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1219 04:00:45.207788    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-199791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 04:00:51.407001    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/bridge-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 04:00:52.620118    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/flannel-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-244717 -n embed-certs-244717
start_stop_delete_test.go:272: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-12-19 04:09:32.148427515 +0000 UTC m=+6275.186606822
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-244717 -n embed-certs-244717
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-244717 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-244717 logs -n 25: (1.216930193s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────
─────┐
	│ COMMAND │                                                                                                                    ARGS                                                                                                                     │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────
─────┤
	│ ssh     │ -p bridge-542624 sudo cat /etc/containerd/config.toml                                                                                                                                                                                       │ bridge-542624                │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:51 UTC │
	│ ssh     │ -p bridge-542624 sudo containerd config dump                                                                                                                                                                                                │ bridge-542624                │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:51 UTC │
	│ ssh     │ -p bridge-542624 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                         │ bridge-542624                │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:51 UTC │
	│ ssh     │ -p bridge-542624 sudo systemctl cat crio --no-pager                                                                                                                                                                                         │ bridge-542624                │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:51 UTC │
	│ ssh     │ -p bridge-542624 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                               │ bridge-542624                │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:51 UTC │
	│ ssh     │ -p bridge-542624 sudo crio config                                                                                                                                                                                                           │ bridge-542624                │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:51 UTC │
	│ delete  │ -p bridge-542624                                                                                                                                                                                                                            │ bridge-542624                │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:51 UTC │
	│ delete  │ -p disable-driver-mounts-189846                                                                                                                                                                                                             │ disable-driver-mounts-189846 │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:51 UTC │
	│ start   │ -p default-k8s-diff-port-168174 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-168174 │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:52 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-094166 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                │ old-k8s-version-094166       │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:51 UTC │
	│ stop    │ -p old-k8s-version-094166 --alsologtostderr -v=3                                                                                                                                                                                            │ old-k8s-version-094166       │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:53 UTC │
	│ addons  │ enable metrics-server -p no-preload-298059 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                     │ no-preload-298059            │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:51 UTC │
	│ stop    │ -p no-preload-298059 --alsologtostderr -v=3                                                                                                                                                                                                 │ no-preload-298059            │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:53 UTC │
	│ addons  │ enable metrics-server -p embed-certs-244717 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                    │ embed-certs-244717           │ jenkins │ v1.37.0 │ 19 Dec 25 03:52 UTC │ 19 Dec 25 03:52 UTC │
	│ stop    │ -p embed-certs-244717 --alsologtostderr -v=3                                                                                                                                                                                                │ embed-certs-244717           │ jenkins │ v1.37.0 │ 19 Dec 25 03:52 UTC │ 19 Dec 25 03:53 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-168174 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                          │ default-k8s-diff-port-168174 │ jenkins │ v1.37.0 │ 19 Dec 25 03:52 UTC │ 19 Dec 25 03:52 UTC │
	│ stop    │ -p default-k8s-diff-port-168174 --alsologtostderr -v=3                                                                                                                                                                                      │ default-k8s-diff-port-168174 │ jenkins │ v1.37.0 │ 19 Dec 25 03:52 UTC │ 19 Dec 25 03:54 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-094166 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                           │ old-k8s-version-094166       │ jenkins │ v1.37.0 │ 19 Dec 25 03:53 UTC │ 19 Dec 25 03:53 UTC │
	│ start   │ -p old-k8s-version-094166 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-094166       │ jenkins │ v1.37.0 │ 19 Dec 25 03:53 UTC │ 19 Dec 25 03:53 UTC │
	│ addons  │ enable dashboard -p no-preload-298059 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                │ no-preload-298059            │ jenkins │ v1.37.0 │ 19 Dec 25 03:53 UTC │ 19 Dec 25 03:53 UTC │
	│ start   │ -p no-preload-298059 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-298059            │ jenkins │ v1.37.0 │ 19 Dec 25 03:53 UTC │ 19 Dec 25 04:00 UTC │
	│ addons  │ enable dashboard -p embed-certs-244717 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                               │ embed-certs-244717           │ jenkins │ v1.37.0 │ 19 Dec 25 03:53 UTC │ 19 Dec 25 03:53 UTC │
	│ start   │ -p embed-certs-244717 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-244717           │ jenkins │ v1.37.0 │ 19 Dec 25 03:53 UTC │ 19 Dec 25 04:00 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-168174 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                     │ default-k8s-diff-port-168174 │ jenkins │ v1.37.0 │ 19 Dec 25 03:54 UTC │ 19 Dec 25 03:54 UTC │
	│ start   │ -p default-k8s-diff-port-168174 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-168174 │ jenkins │ v1.37.0 │ 19 Dec 25 03:54 UTC │ 19 Dec 25 04:01 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────
─────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 03:54:19
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 03:54:19.163618   56230 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:54:19.163755   56230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:54:19.163766   56230 out.go:374] Setting ErrFile to fd 2...
	I1219 03:54:19.163773   56230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:54:19.164086   56230 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5010/.minikube/bin
	I1219 03:54:19.164710   56230 out.go:368] Setting JSON to false
	I1219 03:54:19.166058   56230 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5803,"bootTime":1766110656,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 03:54:19.166138   56230 start.go:143] virtualization: kvm guest
	I1219 03:54:19.167819   56230 out.go:179] * [default-k8s-diff-port-168174] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 03:54:19.168806   56230 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 03:54:19.168798   56230 notify.go:221] Checking for updates...
	I1219 03:54:19.170649   56230 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 03:54:19.171718   56230 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-5010/kubeconfig
	I1219 03:54:19.172800   56230 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5010/.minikube
	I1219 03:54:19.173680   56230 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 03:54:19.174607   56230 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 03:54:19.176155   56230 config.go:182] Loaded profile config "default-k8s-diff-port-168174": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:54:19.176843   56230 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 03:54:19.221795   56230 out.go:179] * Using the kvm2 driver based on existing profile
	I1219 03:54:19.222673   56230 start.go:309] selected driver: kvm2
	I1219 03:54:19.222686   56230 start.go:928] validating driver "kvm2" against &{Name:default-k8s-diff-port-168174 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-168174 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.68 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:54:19.222787   56230 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 03:54:19.223700   56230 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:54:19.223731   56230 cni.go:84] Creating CNI manager for ""
	I1219 03:54:19.223785   56230 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1219 03:54:19.223821   56230 start.go:353] cluster config:
	{Name:default-k8s-diff-port-168174 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-168174 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.68 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:54:19.223901   56230 iso.go:125] acquiring lock: {Name:mk42290af04a74a4dddf27fa33aac85c9bdccfe0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:54:19.225058   56230 out.go:179] * Starting "default-k8s-diff-port-168174" primary control-plane node in "default-k8s-diff-port-168174" cluster
	I1219 03:54:19.225891   56230 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1219 03:54:19.225925   56230 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-5010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1219 03:54:19.225937   56230 cache.go:65] Caching tarball of preloaded images
	I1219 03:54:19.226014   56230 preload.go:238] Found /home/jenkins/minikube-integration/22230-5010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1219 03:54:19.226025   56230 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1219 03:54:19.226103   56230 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/default-k8s-diff-port-168174/config.json ...
	I1219 03:54:19.226379   56230 start.go:360] acquireMachinesLock for default-k8s-diff-port-168174: {Name:mk229398d29442d4d52885bbac963e5004bfbfba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1219 03:54:19.226434   56230 start.go:364] duration metric: took 34.138µs to acquireMachinesLock for "default-k8s-diff-port-168174"
	I1219 03:54:19.226446   56230 start.go:96] Skipping create...Using existing machine configuration
	I1219 03:54:19.226451   56230 fix.go:54] fixHost starting: 
	I1219 03:54:19.228163   56230 fix.go:112] recreateIfNeeded on default-k8s-diff-port-168174: state=Stopped err=<nil>
	W1219 03:54:19.228180   56230 fix.go:138] unexpected machine state, will restart: <nil>
	I1219 03:54:16.533332   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:17.359209   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:17.532886   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:18.033640   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:18.533499   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:19.033373   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:19.533624   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:20.033318   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:20.532932   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:21.032204   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:18.384127   55957 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:54:18.420807   55957 api_server.go:72] duration metric: took 1.537508247s to wait for apiserver process to appear ...
	I1219 03:54:18.420840   55957 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:54:18.420862   55957 api_server.go:253] Checking apiserver healthz at https://192.168.83.54:8443/healthz ...
	I1219 03:54:21.071318   55957 api_server.go:279] https://192.168.83.54:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1219 03:54:21.071349   55957 api_server.go:103] status: https://192.168.83.54:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1219 03:54:21.071368   55957 api_server.go:253] Checking apiserver healthz at https://192.168.83.54:8443/healthz ...
	I1219 03:54:21.151121   55957 api_server.go:279] https://192.168.83.54:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1219 03:54:21.151151   55957 api_server.go:103] status: https://192.168.83.54:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1219 03:54:21.421632   55957 api_server.go:253] Checking apiserver healthz at https://192.168.83.54:8443/healthz ...
	I1219 03:54:21.426745   55957 api_server.go:279] https://192.168.83.54:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:54:21.426773   55957 api_server.go:103] status: https://192.168.83.54:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:54:21.921398   55957 api_server.go:253] Checking apiserver healthz at https://192.168.83.54:8443/healthz ...
	I1219 03:54:21.927340   55957 api_server.go:279] https://192.168.83.54:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:54:21.927368   55957 api_server.go:103] status: https://192.168.83.54:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:54:22.420988   55957 api_server.go:253] Checking apiserver healthz at https://192.168.83.54:8443/healthz ...
	I1219 03:54:22.428236   55957 api_server.go:279] https://192.168.83.54:8443/healthz returned 200:
	ok
	I1219 03:54:22.439161   55957 api_server.go:141] control plane version: v1.34.3
	I1219 03:54:22.439190   55957 api_server.go:131] duration metric: took 4.018341977s to wait for apiserver health ...
	I1219 03:54:22.439202   55957 cni.go:84] Creating CNI manager for ""
	I1219 03:54:22.439211   55957 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1219 03:54:22.440712   55957 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1219 03:54:22.442679   55957 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1219 03:54:22.464908   55957 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1219 03:54:22.524765   55957 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:54:22.531030   55957 system_pods.go:59] 8 kube-system pods found
	I1219 03:54:22.531082   55957 system_pods.go:61] "coredns-66bc5c9577-9ptrv" [22226444-faa6-420d-a862-1ef0441a80e4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:54:22.531096   55957 system_pods.go:61] "etcd-embed-certs-244717" [24fc3b7f-cbce-4591-9d18-9859136e38c6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:54:22.531109   55957 system_pods.go:61] "kube-apiserver-embed-certs-244717" [67d6bc2f-88f4-449a-a029-dc66d6ad1468] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:54:22.531117   55957 system_pods.go:61] "kube-controller-manager-embed-certs-244717" [f2be3fb7-4440-43a3-96ec-b2b241393a84] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:54:22.531126   55957 system_pods.go:61] "kube-proxy-p8gvm" [283607b2-9e6c-44f4-9c9d-7d713c71fb8c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:54:22.531135   55957 system_pods.go:61] "kube-scheduler-embed-certs-244717" [243ff9cd-448c-4e21-98aa-032de9f329fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:54:22.531151   55957 system_pods.go:61] "metrics-server-746fcd58dc-x74d4" [e4a33dcc-d3b6-45a0-92d7-6cfbc5df35b2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:54:22.531159   55957 system_pods.go:61] "storage-provisioner" [99ff9c60-2f30-457a-8cb5-e030eb64a58e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:54:22.531169   55957 system_pods.go:74] duration metric: took 6.378453ms to wait for pod list to return data ...
	I1219 03:54:22.531184   55957 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:54:22.538334   55957 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1219 03:54:22.538361   55957 node_conditions.go:123] node cpu capacity is 2
	I1219 03:54:22.538378   55957 node_conditions.go:105] duration metric: took 7.188571ms to run NodePressure ...
	I1219 03:54:22.538434   55957 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:54:22.838171   55957 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1219 03:54:22.841979   55957 kubeadm.go:744] kubelet initialised
	I1219 03:54:22.842009   55957 kubeadm.go:745] duration metric: took 3.812738ms waiting for restarted kubelet to initialise ...
	I1219 03:54:22.842027   55957 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1219 03:54:22.858280   55957 ops.go:34] apiserver oom_adj: -16
	I1219 03:54:22.858296   55957 kubeadm.go:602] duration metric: took 8.274282939s to restartPrimaryControlPlane
	I1219 03:54:22.858304   55957 kubeadm.go:403] duration metric: took 8.332738451s to StartCluster
	I1219 03:54:22.858319   55957 settings.go:142] acquiring lock: {Name:mkb131693a40b9aa50c302192272518c3561c861 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:54:22.858398   55957 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-5010/kubeconfig
	I1219 03:54:22.860091   55957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5010/kubeconfig: {Name:mk464ac013e036429e360ed78fc04687ed75f83a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:54:22.860306   55957 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.83.54 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1219 03:54:22.860397   55957 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1219 03:54:22.860520   55957 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-244717"
	I1219 03:54:22.860540   55957 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-244717"
	W1219 03:54:22.860553   55957 addons.go:248] addon storage-provisioner should already be in state true
	I1219 03:54:22.860556   55957 addons.go:70] Setting default-storageclass=true in profile "embed-certs-244717"
	I1219 03:54:22.860588   55957 config.go:182] Loaded profile config "embed-certs-244717": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:54:22.860638   55957 addons.go:70] Setting dashboard=true in profile "embed-certs-244717"
	I1219 03:54:22.860664   55957 addons.go:239] Setting addon dashboard=true in "embed-certs-244717"
	W1219 03:54:22.860674   55957 addons.go:248] addon dashboard should already be in state true
	I1219 03:54:22.860596   55957 host.go:66] Checking if "embed-certs-244717" exists ...
	I1219 03:54:22.860698   55957 host.go:66] Checking if "embed-certs-244717" exists ...
	I1219 03:54:22.860603   55957 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-244717"
	I1219 03:54:22.860613   55957 addons.go:70] Setting metrics-server=true in profile "embed-certs-244717"
	I1219 03:54:22.861202   55957 addons.go:239] Setting addon metrics-server=true in "embed-certs-244717"
	W1219 03:54:22.861219   55957 addons.go:248] addon metrics-server should already be in state true
	I1219 03:54:22.861243   55957 host.go:66] Checking if "embed-certs-244717" exists ...
	I1219 03:54:22.861875   55957 out.go:179] * Verifying Kubernetes components...
	I1219 03:54:22.862820   55957 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:54:22.863427   55957 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:54:22.863444   55957 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
	I1219 03:54:22.864891   55957 addons.go:239] Setting addon default-storageclass=true in "embed-certs-244717"
	W1219 03:54:22.864914   55957 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:54:22.864935   55957 host.go:66] Checking if "embed-certs-244717" exists ...
	I1219 03:54:22.866702   55957 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:54:22.866730   55957 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:54:22.866703   55957 main.go:144] libmachine: domain embed-certs-244717 has defined MAC address 52:54:00:c1:1a:c3 in network mk-embed-certs-244717
	I1219 03:54:22.866913   55957 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 03:54:22.867359   55957 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c1:1a:c3", ip: ""} in network mk-embed-certs-244717: {Iface:virbr5 ExpiryTime:2025-12-19 04:54:05 +0000 UTC Type:0 Mac:52:54:00:c1:1a:c3 Iaid: IPaddr:192.168.83.54 Prefix:24 Hostname:embed-certs-244717 Clientid:01:52:54:00:c1:1a:c3}
	I1219 03:54:22.867391   55957 main.go:144] libmachine: domain embed-certs-244717 has defined IP address 192.168.83.54 and MAC address 52:54:00:c1:1a:c3 in network mk-embed-certs-244717
	I1219 03:54:22.867616   55957 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1219 03:54:22.867638   55957 sshutil.go:53] new ssh client: &{IP:192.168.83.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/embed-certs-244717/id_rsa Username:docker}
	I1219 03:54:22.868328   55957 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:54:22.868344   55957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:54:22.868968   55957 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1219 03:54:22.869019   55957 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1219 03:54:22.870937   55957 main.go:144] libmachine: domain embed-certs-244717 has defined MAC address 52:54:00:c1:1a:c3 in network mk-embed-certs-244717
	I1219 03:54:22.871717   55957 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c1:1a:c3", ip: ""} in network mk-embed-certs-244717: {Iface:virbr5 ExpiryTime:2025-12-19 04:54:05 +0000 UTC Type:0 Mac:52:54:00:c1:1a:c3 Iaid: IPaddr:192.168.83.54 Prefix:24 Hostname:embed-certs-244717 Clientid:01:52:54:00:c1:1a:c3}
	I1219 03:54:22.871748   55957 main.go:144] libmachine: domain embed-certs-244717 has defined IP address 192.168.83.54 and MAC address 52:54:00:c1:1a:c3 in network mk-embed-certs-244717
	I1219 03:54:22.871986   55957 sshutil.go:53] new ssh client: &{IP:192.168.83.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/embed-certs-244717/id_rsa Username:docker}
	I1219 03:54:22.872790   55957 main.go:144] libmachine: domain embed-certs-244717 has defined MAC address 52:54:00:c1:1a:c3 in network mk-embed-certs-244717
	I1219 03:54:22.873111   55957 main.go:144] libmachine: domain embed-certs-244717 has defined MAC address 52:54:00:c1:1a:c3 in network mk-embed-certs-244717
	I1219 03:54:22.873212   55957 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c1:1a:c3", ip: ""} in network mk-embed-certs-244717: {Iface:virbr5 ExpiryTime:2025-12-19 04:54:05 +0000 UTC Type:0 Mac:52:54:00:c1:1a:c3 Iaid: IPaddr:192.168.83.54 Prefix:24 Hostname:embed-certs-244717 Clientid:01:52:54:00:c1:1a:c3}
	I1219 03:54:22.873235   55957 main.go:144] libmachine: domain embed-certs-244717 has defined IP address 192.168.83.54 and MAC address 52:54:00:c1:1a:c3 in network mk-embed-certs-244717
	I1219 03:54:22.873423   55957 sshutil.go:53] new ssh client: &{IP:192.168.83.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/embed-certs-244717/id_rsa Username:docker}
	I1219 03:54:22.873635   55957 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c1:1a:c3", ip: ""} in network mk-embed-certs-244717: {Iface:virbr5 ExpiryTime:2025-12-19 04:54:05 +0000 UTC Type:0 Mac:52:54:00:c1:1a:c3 Iaid: IPaddr:192.168.83.54 Prefix:24 Hostname:embed-certs-244717 Clientid:01:52:54:00:c1:1a:c3}
	I1219 03:54:22.873666   55957 main.go:144] libmachine: domain embed-certs-244717 has defined IP address 192.168.83.54 and MAC address 52:54:00:c1:1a:c3 in network mk-embed-certs-244717
	I1219 03:54:22.873832   55957 sshutil.go:53] new ssh client: &{IP:192.168.83.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/embed-certs-244717/id_rsa Username:docker}
	I1219 03:54:23.104462   55957 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:54:23.139781   55957 node_ready.go:35] waiting up to 6m0s for node "embed-certs-244717" to be "Ready" ...
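The "waiting for node to be Ready" step above boils down to polling the node's Ready condition until it reports True. A minimal sketch of that poll using kubectl's jsonpath output (the node name comes from the log; the 5s interval is an assumption, not minikube's actual implementation):

// sketch_node_ready.go - illustrative only: poll a node's Ready condition via kubectl.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func main() {
	const node = "embed-certs-244717" // node name from the log
	jsonpath := `jsonpath={.status.conditions[?(@.type=="Ready")].status}`

	for i := 0; i < 72; i++ { // roughly 6 minutes at 5s intervals
		out, err := exec.Command("kubectl", "get", "node", node, "-o", jsonpath).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			fmt.Printf("node %q is Ready\n", node)
			return
		}
		time.Sleep(5 * time.Second)
	}
	log.Fatal("timed out waiting for node to be Ready")
}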
	I1219 03:54:19.229464   56230 out.go:252] * Restarting existing kvm2 VM for "default-k8s-diff-port-168174" ...
	I1219 03:54:19.229501   56230 main.go:144] libmachine: starting domain...
	I1219 03:54:19.229509   56230 main.go:144] libmachine: ensuring networks are active...
	I1219 03:54:19.230233   56230 main.go:144] libmachine: Ensuring network default is active
	I1219 03:54:19.230721   56230 main.go:144] libmachine: Ensuring network mk-default-k8s-diff-port-168174 is active
	I1219 03:54:19.231248   56230 main.go:144] libmachine: getting domain XML...
	I1219 03:54:19.232369   56230 main.go:144] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>default-k8s-diff-port-168174</name>
	  <uuid>5503b0a8-1398-475d-b625-563c5bc2d168</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22230-5010/.minikube/machines/default-k8s-diff-port-168174/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22230-5010/.minikube/machines/default-k8s-diff-port-168174/default-k8s-diff-port-168174.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:d9:97:a2'/>
	      <source network='mk-default-k8s-diff-port-168174'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:3f:9e:c8'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
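The restart path logged here regenerates the libvirt domain XML above and boots the VM through libvirt, then waits for it to report running. A rough stand-alone equivalent that shells out to virsh instead of using the libvirt Go bindings (the domain name is taken from the log; the XML file path and the polling loop are assumptions for illustration only):

// sketch_virsh_start.go - illustrative only: define and start a libvirt domain
// from an XML file via virsh, then wait until it reports "running".
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func run(args ...string) (string, error) {
	out, err := exec.Command("virsh", args...).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	const name = "default-k8s-diff-port-168174" // domain name from the log
	const xml = "/tmp/" + name + ".xml"          // hypothetical path holding the XML above

	if _, err := run("define", xml); err != nil {
		log.Fatalf("define failed: %v", err)
	}
	if _, err := run("start", name); err != nil {
		log.Fatalf("start failed: %v", err)
	}
	for i := 0; i < 30; i++ {
		state, _ := run("domstate", name)
		if state == "running" {
			fmt.Println("domain is now running")
			return
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("timed out waiting for domain to start")
}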
	
	I1219 03:54:20.662520   56230 main.go:144] libmachine: waiting for domain to start...
	I1219 03:54:20.663943   56230 main.go:144] libmachine: domain is now running
	I1219 03:54:20.663969   56230 main.go:144] libmachine: waiting for IP...
	I1219 03:54:20.664770   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:20.665467   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has current primary IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:20.665481   56230 main.go:144] libmachine: found domain IP: 192.168.50.68
	I1219 03:54:20.665486   56230 main.go:144] libmachine: reserving static IP address...
	I1219 03:54:20.665943   56230 main.go:144] libmachine: found host DHCP lease matching {name: "default-k8s-diff-port-168174", mac: "52:54:00:d9:97:a2", ip: "192.168.50.68"} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:51:35 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:20.665989   56230 main.go:144] libmachine: skip adding static IP to network mk-default-k8s-diff-port-168174 - found existing host DHCP lease matching {name: "default-k8s-diff-port-168174", mac: "52:54:00:d9:97:a2", ip: "192.168.50.68"}
	I1219 03:54:20.666003   56230 main.go:144] libmachine: reserved static IP address 192.168.50.68 for domain default-k8s-diff-port-168174
	I1219 03:54:20.666019   56230 main.go:144] libmachine: waiting for SSH...
	I1219 03:54:20.666027   56230 main.go:144] libmachine: Getting to WaitForSSH function...
	I1219 03:54:20.668799   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:20.669225   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:51:35 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:20.669267   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:20.669495   56230 main.go:144] libmachine: Using SSH client type: native
	I1219 03:54:20.669789   56230 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.50.68 22 <nil> <nil>}
	I1219 03:54:20.669805   56230 main.go:144] libmachine: About to run SSH command:
	exit 0
	I1219 03:54:23.725788   56230 main.go:144] libmachine: Error dialing TCP: dial tcp 192.168.50.68:22: connect: no route to host
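The "waiting for SSH" phase is essentially a TCP dial retry against port 22: "no route to host" and "connection refused" errors like the ones above are expected while the guest is still booting, and the loop simply tries again. A minimal sketch of that retry loop with the standard library (guest address from the log; the timeout and interval values are assumptions):

// sketch_wait_ssh.go - illustrative retry loop for "waiting for SSH".
package main

import (
	"fmt"
	"log"
	"net"
	"time"
)

func main() {
	addr := "192.168.50.68:22" // guest IP and SSH port from the log
	deadline := time.Now().Add(5 * time.Minute)

	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("SSH port is reachable")
			return
		}
		// Errors are normal while the VM network stack comes up; retry.
		log.Printf("Error dialing TCP: %v", err)
		time.Sleep(3 * time.Second)
	}
	log.Fatal("timed out waiting for SSH")
}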
	I1219 03:54:21.532614   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:22.032788   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:22.532959   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:23.032773   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:23.531977   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:24.033500   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:24.532177   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:25.033441   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:25.533482   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:26.031758   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:23.198551   55957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:54:23.404667   55957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:54:23.420466   55957 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:54:23.445604   55957 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1219 03:54:23.445631   55957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1219 03:54:23.525300   55957 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1219 03:54:23.525326   55957 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1219 03:54:23.593759   55957 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:54:23.593784   55957 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1219 03:54:23.645141   55957 node_ready.go:49] node "embed-certs-244717" is "Ready"
	I1219 03:54:23.645171   55957 node_ready.go:38] duration metric: took 505.352434ms for node "embed-certs-244717" to be "Ready" ...
	I1219 03:54:23.645183   55957 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:54:23.645241   55957 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:54:23.652800   55957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:54:24.781529   55957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.376827148s)
	I1219 03:54:24.781591   55957 ssh_runner.go:235] Completed: test -f /usr/bin/helm: (1.361072264s)
	I1219 03:54:24.781616   55957 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.136359787s)
	I1219 03:54:24.781638   55957 api_server.go:72] duration metric: took 1.9213054s to wait for apiserver process to appear ...
	I1219 03:54:24.781645   55957 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:54:24.781662   55957 api_server.go:253] Checking apiserver healthz at https://192.168.83.54:8443/healthz ...
	I1219 03:54:24.781671   55957 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:54:24.791019   55957 api_server.go:279] https://192.168.83.54:8443/healthz returned 200:
	ok
	I1219 03:54:24.791945   55957 api_server.go:141] control plane version: v1.34.3
	I1219 03:54:24.791970   55957 api_server.go:131] duration metric: took 10.31791ms to wait for apiserver health ...
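The healthz wait above is a plain HTTPS GET against the apiserver's /healthz endpoint that succeeds once it returns 200 with body "ok". A minimal sketch (endpoint from the log; skipping TLS verification is an assumption for brevity here, the real client trusts the cluster CA):

// sketch_healthz.go - poll the apiserver /healthz endpoint until it returns "ok".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.83.54:8443/healthz" // endpoint from the log
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Sketch-only shortcut; minikube verifies against the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	for i := 0; i < 60; i++ {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				fmt.Println("apiserver is healthy")
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("apiserver never became healthy")
}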
	I1219 03:54:24.791980   55957 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:54:24.795539   55957 system_pods.go:59] 8 kube-system pods found
	I1219 03:54:24.795599   55957 system_pods.go:61] "coredns-66bc5c9577-9ptrv" [22226444-faa6-420d-a862-1ef0441a80e4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:54:24.795612   55957 system_pods.go:61] "etcd-embed-certs-244717" [24fc3b7f-cbce-4591-9d18-9859136e38c6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:54:24.795627   55957 system_pods.go:61] "kube-apiserver-embed-certs-244717" [67d6bc2f-88f4-449a-a029-dc66d6ad1468] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:54:24.795638   55957 system_pods.go:61] "kube-controller-manager-embed-certs-244717" [f2be3fb7-4440-43a3-96ec-b2b241393a84] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:54:24.795644   55957 system_pods.go:61] "kube-proxy-p8gvm" [283607b2-9e6c-44f4-9c9d-7d713c71fb8c] Running
	I1219 03:54:24.795655   55957 system_pods.go:61] "kube-scheduler-embed-certs-244717" [243ff9cd-448c-4e21-98aa-032de9f329fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:54:24.795666   55957 system_pods.go:61] "metrics-server-746fcd58dc-x74d4" [e4a33dcc-d3b6-45a0-92d7-6cfbc5df35b2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:54:24.795671   55957 system_pods.go:61] "storage-provisioner" [99ff9c60-2f30-457a-8cb5-e030eb64a58e] Running
	I1219 03:54:24.795683   55957 system_pods.go:74] duration metric: took 3.696303ms to wait for pod list to return data ...
	I1219 03:54:24.795694   55957 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:54:24.797860   55957 default_sa.go:45] found service account: "default"
	I1219 03:54:24.797884   55957 default_sa.go:55] duration metric: took 2.181869ms for default service account to be created ...
	I1219 03:54:24.797895   55957 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:54:24.800212   55957 system_pods.go:86] 8 kube-system pods found
	I1219 03:54:24.800242   55957 system_pods.go:89] "coredns-66bc5c9577-9ptrv" [22226444-faa6-420d-a862-1ef0441a80e4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:54:24.800255   55957 system_pods.go:89] "etcd-embed-certs-244717" [24fc3b7f-cbce-4591-9d18-9859136e38c6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:54:24.800267   55957 system_pods.go:89] "kube-apiserver-embed-certs-244717" [67d6bc2f-88f4-449a-a029-dc66d6ad1468] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:54:24.800277   55957 system_pods.go:89] "kube-controller-manager-embed-certs-244717" [f2be3fb7-4440-43a3-96ec-b2b241393a84] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:54:24.800283   55957 system_pods.go:89] "kube-proxy-p8gvm" [283607b2-9e6c-44f4-9c9d-7d713c71fb8c] Running
	I1219 03:54:24.800291   55957 system_pods.go:89] "kube-scheduler-embed-certs-244717" [243ff9cd-448c-4e21-98aa-032de9f329fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:54:24.800300   55957 system_pods.go:89] "metrics-server-746fcd58dc-x74d4" [e4a33dcc-d3b6-45a0-92d7-6cfbc5df35b2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:54:24.800307   55957 system_pods.go:89] "storage-provisioner" [99ff9c60-2f30-457a-8cb5-e030eb64a58e] Running
	I1219 03:54:24.800317   55957 system_pods.go:126] duration metric: took 2.415918ms to wait for k8s-apps to be running ...
	I1219 03:54:24.800326   55957 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:54:24.800389   55957 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:54:24.901954   55957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.249113047s)
	I1219 03:54:24.901997   55957 addons.go:500] Verifying addon metrics-server=true in "embed-certs-244717"
	I1219 03:54:24.902043   55957 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	I1219 03:54:24.902053   55957 system_svc.go:56] duration metric: took 101.72157ms WaitForService to wait for kubelet
	I1219 03:54:24.902083   55957 kubeadm.go:587] duration metric: took 2.041739112s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:54:24.902106   55957 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:54:24.912597   55957 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1219 03:54:24.912623   55957 node_conditions.go:123] node cpu capacity is 2
	I1219 03:54:24.912638   55957 node_conditions.go:105] duration metric: took 10.525951ms to run NodePressure ...
	I1219 03:54:24.912652   55957 start.go:242] waiting for startup goroutines ...
	I1219 03:54:25.801998   55957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	I1219 03:54:29.507152   55957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (3.70510669s)
	I1219 03:54:29.507259   55957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:54:29.992247   55957 addons.go:500] Verifying addon dashboard=true in "embed-certs-244717"
	I1219 03:54:29.995517   55957 out.go:179] * Verifying dashboard addon...
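Verifying the dashboard addon then reduces to polling pods matching the dashboard label selector until one leaves Pending and becomes Running, which is exactly what the repeating kapi.go lines below are doing. A sketch of that wait via kubectl (namespace and selector taken from the log; the 500ms interval mirrors the timestamps in the log):

// sketch_pod_wait.go - wait for a pod matching a label selector to be Running.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func main() {
	ns := "kubernetes-dashboard"                                   // namespace from the log
	selector := "app.kubernetes.io/name=kubernetes-dashboard-web"  // label selector from the log

	for i := 0; i < 240; i++ {
		out, err := exec.Command("kubectl", "get", "pods", "-n", ns, "-l", selector,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		phases := strings.Fields(string(out))
		if err == nil && len(phases) > 0 && phases[0] == "Running" {
			fmt.Println("dashboard pod is Running")
			return
		}
		log.Printf("waiting for pod %q, current state: %v", selector, phases)
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("timed out waiting for dashboard pod")
}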
	I1219 03:54:26.531479   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:27.031454   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:27.532215   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:28.032964   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:28.532268   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:29.032253   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:29.533154   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:30.032379   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:30.532853   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:31.032643   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:29.998065   55957 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:54:30.003541   55957 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:54:30.003561   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:30.510371   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:31.003319   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:31.502854   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:32.002809   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:32.503083   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:33.001709   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:29.805953   56230 main.go:144] libmachine: Error dialing TCP: dial tcp 192.168.50.68:22: connect: no route to host
	I1219 03:54:32.806901   56230 main.go:144] libmachine: Error dialing TCP: dial tcp 192.168.50.68:22: connect: connection refused
	I1219 03:54:31.531396   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:32.033946   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:32.532063   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:33.033088   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:33.532601   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:34.032154   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:34.532734   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:35.031403   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:35.532231   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:36.031798   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:33.501421   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:34.001823   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:34.501944   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:35.001242   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:35.502033   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:36.001834   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:36.503279   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:37.002832   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:37.501859   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:38.002218   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:35.914133   56230 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 03:54:35.917629   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:35.918062   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:35.918084   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:35.918331   56230 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/default-k8s-diff-port-168174/config.json ...
	I1219 03:54:35.918603   56230 machine.go:94] provisionDockerMachine start ...
	I1219 03:54:35.921009   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:35.921341   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:35.921380   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:35.921581   56230 main.go:144] libmachine: Using SSH client type: native
	I1219 03:54:35.921797   56230 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.50.68 22 <nil> <nil>}
	I1219 03:54:35.921810   56230 main.go:144] libmachine: About to run SSH command:
	hostname
	I1219 03:54:36.027619   56230 main.go:144] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1219 03:54:36.027644   56230 buildroot.go:166] provisioning hostname "default-k8s-diff-port-168174"
	I1219 03:54:36.030973   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.031540   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:36.031597   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.031855   56230 main.go:144] libmachine: Using SSH client type: native
	I1219 03:54:36.032105   56230 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.50.68 22 <nil> <nil>}
	I1219 03:54:36.032121   56230 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-168174 && echo "default-k8s-diff-port-168174" | sudo tee /etc/hostname
	I1219 03:54:36.154920   56230 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-168174
	
	I1219 03:54:36.157818   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.158270   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:36.158298   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.158481   56230 main.go:144] libmachine: Using SSH client type: native
	I1219 03:54:36.158705   56230 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.50.68 22 <nil> <nil>}
	I1219 03:54:36.158721   56230 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-168174' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-168174/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-168174' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1219 03:54:36.278763   56230 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 03:54:36.278793   56230 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22230-5010/.minikube CaCertPath:/home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22230-5010/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22230-5010/.minikube}
	I1219 03:54:36.278815   56230 buildroot.go:174] setting up certificates
	I1219 03:54:36.278825   56230 provision.go:84] configureAuth start
	I1219 03:54:36.282034   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.282595   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:36.282631   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.285039   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.285396   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:36.285421   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.285558   56230 provision.go:143] copyHostCerts
	I1219 03:54:36.285634   56230 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-5010/.minikube/ca.pem, removing ...
	I1219 03:54:36.285655   56230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-5010/.minikube/ca.pem
	I1219 03:54:36.285732   56230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22230-5010/.minikube/ca.pem (1082 bytes)
	I1219 03:54:36.285873   56230 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-5010/.minikube/cert.pem, removing ...
	I1219 03:54:36.285889   56230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-5010/.minikube/cert.pem
	I1219 03:54:36.285939   56230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22230-5010/.minikube/cert.pem (1123 bytes)
	I1219 03:54:36.286034   56230 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-5010/.minikube/key.pem, removing ...
	I1219 03:54:36.286044   56230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-5010/.minikube/key.pem
	I1219 03:54:36.286086   56230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22230-5010/.minikube/key.pem (1679 bytes)
	I1219 03:54:36.286187   56230 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22230-5010/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-168174 san=[127.0.0.1 192.168.50.68 default-k8s-diff-port-168174 localhost minikube]
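The server.pem generated here is a CA-signed certificate whose subject alternative names must cover every address the machine is reached on (the san=[...] list in the line above). A small sketch for inspecting those SANs with the standard library, useful when a TLS handshake to the VM fails (the file path is the one from the log):

// sketch_cert_sans.go - print the SANs of a PEM-encoded server certificate.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Path from the log; adjust for your environment.
	path := "/home/jenkins/minikube-integration/22230-5010/.minikube/machines/server.pem"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("DNS SANs:", cert.DNSNames)    // e.g. default-k8s-diff-port-168174, localhost, minikube
	fmt.Println("IP SANs: ", cert.IPAddresses) // e.g. 127.0.0.1, 192.168.50.68
}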
	I1219 03:54:36.425832   56230 provision.go:177] copyRemoteCerts
	I1219 03:54:36.425892   56230 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1219 03:54:36.428255   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.428656   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:36.428686   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.428839   56230 sshutil.go:53] new ssh client: &{IP:192.168.50.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/default-k8s-diff-port-168174/id_rsa Username:docker}
	I1219 03:54:36.519020   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1219 03:54:36.558591   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1219 03:54:36.592448   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1219 03:54:36.618754   56230 provision.go:87] duration metric: took 339.918165ms to configureAuth
	I1219 03:54:36.618782   56230 buildroot.go:189] setting minikube options for container-runtime
	I1219 03:54:36.618965   56230 config.go:182] Loaded profile config "default-k8s-diff-port-168174": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:54:36.622080   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.622643   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:36.622690   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.622932   56230 main.go:144] libmachine: Using SSH client type: native
	I1219 03:54:36.623146   56230 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.50.68 22 <nil> <nil>}
	I1219 03:54:36.623170   56230 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1219 03:54:36.870072   56230 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1219 03:54:36.870099   56230 machine.go:97] duration metric: took 951.477635ms to provisionDockerMachine
	I1219 03:54:36.870113   56230 start.go:293] postStartSetup for "default-k8s-diff-port-168174" (driver="kvm2")
	I1219 03:54:36.870125   56230 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1219 03:54:36.870211   56230 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1219 03:54:36.873360   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.873824   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:36.873854   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.873997   56230 sshutil.go:53] new ssh client: &{IP:192.168.50.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/default-k8s-diff-port-168174/id_rsa Username:docker}
	I1219 03:54:36.957455   56230 ssh_runner.go:195] Run: cat /etc/os-release
	I1219 03:54:36.962098   56230 info.go:137] Remote host: Buildroot 2025.02
	I1219 03:54:36.962123   56230 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-5010/.minikube/addons for local assets ...
	I1219 03:54:36.962187   56230 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-5010/.minikube/files for local assets ...
	I1219 03:54:36.962258   56230 filesync.go:149] local asset: /home/jenkins/minikube-integration/22230-5010/.minikube/files/etc/ssl/certs/89372.pem -> 89372.pem in /etc/ssl/certs
	I1219 03:54:36.962365   56230 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1219 03:54:36.973208   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/files/etc/ssl/certs/89372.pem --> /etc/ssl/certs/89372.pem (1708 bytes)
	I1219 03:54:37.001535   56230 start.go:296] duration metric: took 131.409863ms for postStartSetup
	I1219 03:54:37.001590   56230 fix.go:56] duration metric: took 17.775113489s for fixHost
	I1219 03:54:37.004880   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:37.005287   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:37.005312   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:37.005528   56230 main.go:144] libmachine: Using SSH client type: native
	I1219 03:54:37.005820   56230 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.50.68 22 <nil> <nil>}
	I1219 03:54:37.005839   56230 main.go:144] libmachine: About to run SSH command:
	date +%s.%N
	I1219 03:54:37.113597   56230 main.go:144] libmachine: SSH cmd err, output: <nil>: 1766116477.079572846
	
	I1219 03:54:37.113621   56230 fix.go:216] guest clock: 1766116477.079572846
	I1219 03:54:37.113630   56230 fix.go:229] Guest: 2025-12-19 03:54:37.079572846 +0000 UTC Remote: 2025-12-19 03:54:37.001596336 +0000 UTC m=+17.891500693 (delta=77.97651ms)
	I1219 03:54:37.113645   56230 fix.go:200] guest clock delta is within tolerance: 77.97651ms
	I1219 03:54:37.113651   56230 start.go:83] releasing machines lock for "default-k8s-diff-port-168174", held for 17.887209269s
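The guest-clock check reads `date +%s.%N` over SSH and compares it with the host's reference time; a delta inside the tolerance (77.97ms here) means no clock resync is needed. A sketch of that comparison (the sample value comes from the log; the tolerance constant is an assumption for illustration):

// sketch_clock_delta.go - compare a guest timestamp against the host clock.
package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func main() {
	// Output of `date +%s.%N` on the guest, taken from the log.
	guestOut := "1766116477.079572846"
	guestSec, _ := strconv.ParseFloat(guestOut, 64)
	guest := time.Unix(0, int64(guestSec*float64(time.Second)))

	host := time.Now() // the real code uses the host-side time captured around the SSH call
	delta := math.Abs(host.Sub(guest).Seconds())

	const tolerance = 1.0 // seconds; assumed value for the sketch
	if delta <= tolerance {
		fmt.Printf("guest clock delta %.3fs is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %.3fs exceeds tolerance, would resync\n", delta)
	}
}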
	I1219 03:54:37.116322   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:37.116867   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:37.116898   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:37.117549   56230 ssh_runner.go:195] Run: cat /version.json
	I1219 03:54:37.117645   56230 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1219 03:54:37.121299   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:37.121532   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:37.121841   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:37.121885   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:37.122114   56230 sshutil.go:53] new ssh client: &{IP:192.168.50.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/default-k8s-diff-port-168174/id_rsa Username:docker}
	I1219 03:54:37.122168   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:37.122203   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:37.122439   56230 sshutil.go:53] new ssh client: &{IP:192.168.50.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/default-k8s-diff-port-168174/id_rsa Username:docker}
	I1219 03:54:37.200188   56230 ssh_runner.go:195] Run: systemctl --version
	I1219 03:54:37.236006   56230 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1219 03:54:37.382400   56230 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1219 03:54:37.391093   56230 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1219 03:54:37.391172   56230 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1219 03:54:37.412549   56230 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1219 03:54:37.412595   56230 start.go:496] detecting cgroup driver to use...
	I1219 03:54:37.412701   56230 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1219 03:54:37.432292   56230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1219 03:54:37.448705   56230 docker.go:218] disabling cri-docker service (if available) ...
	I1219 03:54:37.448757   56230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1219 03:54:37.464885   56230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1219 03:54:37.488524   56230 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1219 03:54:37.648374   56230 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1219 03:54:37.863271   56230 docker.go:234] disabling docker service ...
	I1219 03:54:37.863333   56230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1219 03:54:37.880285   56230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1219 03:54:37.895631   56230 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1219 03:54:38.053642   56230 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1219 03:54:38.210829   56230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1219 03:54:38.227130   56230 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 03:54:38.248699   56230 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1219 03:54:38.248763   56230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:54:38.260875   56230 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1219 03:54:38.260939   56230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:54:38.273032   56230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:54:38.284839   56230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:54:38.296706   56230 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1219 03:54:38.309100   56230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:54:38.320373   56230 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:54:38.343213   56230 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:54:38.355251   56230 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1219 03:54:38.366693   56230 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1219 03:54:38.366745   56230 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1219 03:54:38.386325   56230 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
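The netfilter step above tolerates a missing sysctl key by loading br_netfilter as a fallback, then enables IPv4 forwarding. A sketch of that check-then-fallback sequence, running the same commands the log shows via os/exec (illustrative only):

// sketch_netfilter.go - verify bridge netfilter, loading br_netfilter if needed.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func sudo(args ...string) error {
	return exec.Command("sudo", args...).Run()
}

func main() {
	// If the sysctl key is absent, the bridge netfilter module is not loaded yet.
	if err := sudo("sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		log.Printf("sysctl check failed (%v), loading br_netfilter", err)
		if err := sudo("modprobe", "br_netfilter"); err != nil {
			log.Fatalf("modprobe br_netfilter: %v", err)
		}
	}
	// Enable IPv4 forwarding, as the log does with `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := sudo("sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
		log.Fatalf("enable ip_forward: %v", err)
	}
	fmt.Println("bridge netfilter and ip_forward configured")
}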
	I1219 03:54:38.397641   56230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:54:38.542778   56230 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1219 03:54:38.656266   56230 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1219 03:54:38.656354   56230 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1219 03:54:38.662225   56230 start.go:564] Will wait 60s for crictl version
	I1219 03:54:38.662286   56230 ssh_runner.go:195] Run: which crictl
	I1219 03:54:38.666072   56230 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1219 03:54:38.702242   56230 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1219 03:54:38.702324   56230 ssh_runner.go:195] Run: crio --version
	I1219 03:54:38.730733   56230 ssh_runner.go:195] Run: crio --version
	I1219 03:54:38.760806   56230 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.29.1 ...
	I1219 03:54:38.764622   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:38.765017   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:38.765041   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:38.765207   56230 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1219 03:54:38.769555   56230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:54:38.784218   56230 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-168174 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-168174 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.68 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1219 03:54:38.784318   56230 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1219 03:54:38.784389   56230 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:54:38.817654   56230 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.3". assuming images are not preloaded.
	I1219 03:54:38.817721   56230 ssh_runner.go:195] Run: which lz4
	I1219 03:54:38.821795   56230 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1219 03:54:38.826295   56230 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1219 03:54:38.826327   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340314847 bytes)
	I1219 03:54:36.531538   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:37.031367   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:37.531677   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:38.031134   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:38.532312   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:39.032552   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:39.532678   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:40.031267   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:40.531858   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:41.032379   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:38.502453   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:39.002949   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:39.502152   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:40.002580   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:40.501440   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:41.002612   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:41.501822   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:42.002247   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:42.502196   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:43.002641   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:40.045060   56230 crio.go:462] duration metric: took 1.223302426s to copy over tarball
	I1219 03:54:40.045121   56230 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1219 03:54:41.702628   56230 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.657483082s)
	I1219 03:54:41.702653   56230 crio.go:469] duration metric: took 1.657571319s to extract the tarball
	I1219 03:54:41.702661   56230 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1219 03:54:41.742396   56230 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:54:41.778250   56230 crio.go:514] all images are preloaded for cri-o runtime.
	I1219 03:54:41.778274   56230 cache_images.go:86] Images are preloaded, skipping loading
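
The preload handling above reduces to a simple decision: ask the CRI runtime which images it already has, and if the expected control-plane image is missing, copy the preloaded tarball over and unpack it into /var. A rough Go sketch of that flow, reusing the exact crictl and tar invocations from the log; the substring check on the JSON output is a simplification of what minikube actually parses.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const wantImage = "registry.k8s.io/kube-apiserver:v1.34.3" // image name from the log

	// Ask the CRI runtime what it already has.
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	if strings.Contains(string(out), wantImage) {
		fmt.Println("images are preloaded, skipping extraction")
		return
	}

	// Same extraction command the log runs once the tarball has been copied over.
	tar := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if err := tar.Run(); err != nil {
		fmt.Println("extract failed:", err)
	}
}
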
	I1219 03:54:41.778281   56230 kubeadm.go:935] updating node { 192.168.50.68 8444 v1.34.3 crio true true} ...
	I1219 03:54:41.778393   56230 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-168174 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.68
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-168174 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1219 03:54:41.778466   56230 ssh_runner.go:195] Run: crio config
	I1219 03:54:41.824084   56230 cni.go:84] Creating CNI manager for ""
	I1219 03:54:41.824114   56230 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1219 03:54:41.824134   56230 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1219 03:54:41.824161   56230 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.68 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-168174 NodeName:default-k8s-diff-port-168174 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.68"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.68 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1219 03:54:41.824332   56230 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.68
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-168174"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.68"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.68"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1219 03:54:41.824436   56230 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1219 03:54:41.838181   56230 binaries.go:51] Found k8s binaries, skipping transfer
	I1219 03:54:41.838263   56230 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1219 03:54:41.850122   56230 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1219 03:54:41.871647   56230 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1219 03:54:41.891031   56230 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
	I1219 03:54:41.910970   56230 ssh_runner.go:195] Run: grep 192.168.50.68	control-plane.minikube.internal$ /etc/hosts
	I1219 03:54:41.915265   56230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.68	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:54:41.929042   56230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:54:42.077837   56230 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:54:42.111492   56230 certs.go:69] Setting up /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/default-k8s-diff-port-168174 for IP: 192.168.50.68
	I1219 03:54:42.111515   56230 certs.go:195] generating shared ca certs ...
	I1219 03:54:42.111529   56230 certs.go:227] acquiring lock for ca certs: {Name:mk433b742ffd55233764aebe3c6f298e0d1f02ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:54:42.111713   56230 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22230-5010/.minikube/ca.key
	I1219 03:54:42.111782   56230 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22230-5010/.minikube/proxy-client-ca.key
	I1219 03:54:42.111804   56230 certs.go:257] generating profile certs ...
	I1219 03:54:42.111942   56230 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/default-k8s-diff-port-168174/client.key
	I1219 03:54:42.112027   56230 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/default-k8s-diff-port-168174/apiserver.key.ed8a7a08
	I1219 03:54:42.112078   56230 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/default-k8s-diff-port-168174/proxy-client.key
	I1219 03:54:42.112201   56230 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/8937.pem (1338 bytes)
	W1219 03:54:42.112240   56230 certs.go:480] ignoring /home/jenkins/minikube-integration/22230-5010/.minikube/certs/8937_empty.pem, impossibly tiny 0 bytes
	I1219 03:54:42.112252   56230 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca-key.pem (1675 bytes)
	I1219 03:54:42.112280   56230 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca.pem (1082 bytes)
	I1219 03:54:42.112309   56230 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/cert.pem (1123 bytes)
	I1219 03:54:42.112361   56230 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/key.pem (1679 bytes)
	I1219 03:54:42.112423   56230 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/files/etc/ssl/certs/89372.pem (1708 bytes)
	I1219 03:54:42.113420   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1219 03:54:42.154291   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1219 03:54:42.194006   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1219 03:54:42.221732   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1219 03:54:42.253007   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/default-k8s-diff-port-168174/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1219 03:54:42.280935   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/default-k8s-diff-port-168174/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1219 03:54:42.315083   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/default-k8s-diff-port-168174/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1219 03:54:42.342426   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/default-k8s-diff-port-168174/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1219 03:54:42.371444   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1219 03:54:42.402350   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/certs/8937.pem --> /usr/share/ca-certificates/8937.pem (1338 bytes)
	I1219 03:54:42.430533   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/files/etc/ssl/certs/89372.pem --> /usr/share/ca-certificates/89372.pem (1708 bytes)
	I1219 03:54:42.462798   56230 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1219 03:54:42.483977   56230 ssh_runner.go:195] Run: openssl version
	I1219 03:54:42.490839   56230 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/89372.pem
	I1219 03:54:42.503565   56230 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/89372.pem /etc/ssl/certs/89372.pem
	I1219 03:54:42.514852   56230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/89372.pem
	I1219 03:54:42.520693   56230 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 19 02:46 /usr/share/ca-certificates/89372.pem
	I1219 03:54:42.520739   56230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/89372.pem
	I1219 03:54:42.528108   56230 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1219 03:54:42.539720   56230 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/89372.pem /etc/ssl/certs/3ec20f2e.0
	I1219 03:54:42.550915   56230 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:54:42.561679   56230 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1219 03:54:42.572526   56230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:54:42.577725   56230 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 19 02:26 /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:54:42.577781   56230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:54:42.584786   56230 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1219 03:54:42.596115   56230 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1219 03:54:42.607332   56230 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8937.pem
	I1219 03:54:42.618682   56230 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8937.pem /etc/ssl/certs/8937.pem
	I1219 03:54:42.630292   56230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8937.pem
	I1219 03:54:42.635409   56230 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 19 02:46 /usr/share/ca-certificates/8937.pem
	I1219 03:54:42.635452   56230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8937.pem
	I1219 03:54:42.642710   56230 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1219 03:54:42.654104   56230 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8937.pem /etc/ssl/certs/51391683.0
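
The ls / openssl x509 -hash / ln -fs sequence above installs each CA under the OpenSSL hashed-directory convention: tools look up a trusted CA via a filename of the form <subject-hash>.0 in /etc/ssl/certs, so the hash is computed and a symlink is pointed at the PEM file. A small Go sketch of the same pattern; the target directory is a parameter so it can be tried against a scratch directory.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCAByHash reproduces the "openssl x509 -hash" + "ln -fs" pattern above:
// compute the subject hash OpenSSL uses for CA lookup and point
// <certsDir>/<hash>.0 at the certificate.
func linkCAByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA in the log
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // emulate the -f in "ln -fs"
	return os.Symlink(certPath, link)
}

func main() {
	// Certificate path from the log; /tmp/certs stands in for /etc/ssl/certs.
	if err := linkCAByHash("/usr/share/ca-certificates/minikubeCA.pem", "/tmp/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
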
	I1219 03:54:42.666207   56230 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1219 03:54:42.671385   56230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1219 03:54:42.678373   56230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1219 03:54:42.685534   56230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1219 03:54:42.692140   56230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1219 03:54:42.698549   56230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1219 03:54:42.705279   56230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
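
Each openssl x509 -checkend 86400 call above asks one question: does this certificate expire within the next 24 hours (86400 seconds)? The same check can be expressed directly with Go's crypto/x509; a minimal sketch, using one of the paths from the log (any PEM certificate will do).

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// which is what "openssl x509 -checkend 86400" is asking (86400s = 24h).
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
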
	I1219 03:54:42.712285   56230 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-168174 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-168174 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.68 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:54:42.712383   56230 cri.go:57] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1219 03:54:42.712433   56230 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:54:42.745951   56230 cri.go:92] found id: ""
	I1219 03:54:42.746000   56230 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1219 03:54:42.757185   56230 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1219 03:54:42.757201   56230 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1219 03:54:42.757240   56230 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1219 03:54:42.768155   56230 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1219 03:54:42.769156   56230 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-168174" does not appear in /home/jenkins/minikube-integration/22230-5010/kubeconfig
	I1219 03:54:42.769826   56230 kubeconfig.go:62] /home/jenkins/minikube-integration/22230-5010/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-168174" cluster setting kubeconfig missing "default-k8s-diff-port-168174" context setting]
	I1219 03:54:42.770666   56230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5010/kubeconfig: {Name:mk464ac013e036429e360ed78fc04687ed75f83a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:54:42.772207   56230 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1219 03:54:42.782776   56230 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.50.68
	I1219 03:54:42.782799   56230 kubeadm.go:1161] stopping kube-system containers ...
	I1219 03:54:42.782811   56230 cri.go:57] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1219 03:54:42.782853   56230 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:54:42.827373   56230 cri.go:92] found id: ""
	I1219 03:54:42.827451   56230 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1219 03:54:42.855644   56230 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1219 03:54:42.867640   56230 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1219 03:54:42.867664   56230 kubeadm.go:158] found existing configuration files:
	
	I1219 03:54:42.867713   56230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1219 03:54:42.879242   56230 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1219 03:54:42.879345   56230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1219 03:54:42.890737   56230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1219 03:54:42.900979   56230 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1219 03:54:42.901033   56230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1219 03:54:42.911989   56230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1219 03:54:42.922081   56230 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1219 03:54:42.922121   56230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1219 03:54:42.933197   56230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1219 03:54:42.943650   56230 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1219 03:54:42.943706   56230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1219 03:54:42.954819   56230 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1219 03:54:42.965503   56230 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:54:43.022499   56230 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:54:41.533216   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:42.031785   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:42.531762   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:43.032044   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:43.531965   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:44.032348   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:44.532701   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:45.032707   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:45.531729   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:46.032031   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:43.502226   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:44.002160   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:44.502401   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:45.002719   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:45.502332   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:46.001536   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:46.501990   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:47.002547   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:47.501403   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:48.002631   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:44.652743   56230 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.630210852s)
	I1219 03:54:44.652817   56230 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:54:44.912221   56230 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:54:44.996004   56230 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
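
Because existing configuration files were found, the restart path replays individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml instead of running a full kubeadm init. A compact sketch that runs the same commands in the same order, with the PATH override the log shows so the pinned v1.34.3 binaries are used:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Phase order, config path, and PATH override are copied from the log.
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, p := range phases {
		cmd := exec.Command("sudo", "/bin/bash", "-c",
			`env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase `+p+
				` --config /var/tmp/minikube/kubeadm.yaml`)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %q failed: %v\n", p, err)
			return
		}
	}
}
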
	I1219 03:54:45.067644   56230 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:54:45.067725   56230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:54:45.568080   56230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:54:46.068722   56230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:54:46.568114   56230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:54:47.068013   56230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:54:47.117129   56230 api_server.go:72] duration metric: took 2.049494189s to wait for apiserver process to appear ...
	I1219 03:54:47.117153   56230 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:54:47.117174   56230 api_server.go:253] Checking apiserver healthz at https://192.168.50.68:8444/healthz ...
	I1219 03:54:47.117680   56230 api_server.go:269] stopped: https://192.168.50.68:8444/healthz: Get "https://192.168.50.68:8444/healthz": dial tcp 192.168.50.68:8444: connect: connection refused
	I1219 03:54:47.617323   56230 api_server.go:253] Checking apiserver healthz at https://192.168.50.68:8444/healthz ...
	I1219 03:54:46.534635   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:47.031616   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:47.531182   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:48.032359   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:48.532986   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:49.031214   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:49.532385   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:50.032130   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:50.532478   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:51.031638   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:49.988621   56230 api_server.go:279] https://192.168.50.68:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1219 03:54:49.988647   56230 api_server.go:103] status: https://192.168.50.68:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1219 03:54:49.988661   56230 api_server.go:253] Checking apiserver healthz at https://192.168.50.68:8444/healthz ...
	I1219 03:54:50.015383   56230 api_server.go:279] https://192.168.50.68:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1219 03:54:50.015404   56230 api_server.go:103] status: https://192.168.50.68:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1219 03:54:50.117699   56230 api_server.go:253] Checking apiserver healthz at https://192.168.50.68:8444/healthz ...
	I1219 03:54:50.129872   56230 api_server.go:279] https://192.168.50.68:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:54:50.129895   56230 api_server.go:103] status: https://192.168.50.68:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:54:50.617488   56230 api_server.go:253] Checking apiserver healthz at https://192.168.50.68:8444/healthz ...
	I1219 03:54:50.622220   56230 api_server.go:279] https://192.168.50.68:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:54:50.622255   56230 api_server.go:103] status: https://192.168.50.68:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:54:51.117929   56230 api_server.go:253] Checking apiserver healthz at https://192.168.50.68:8444/healthz ...
	I1219 03:54:51.126710   56230 api_server.go:279] https://192.168.50.68:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:54:51.126741   56230 api_server.go:103] status: https://192.168.50.68:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:54:51.617345   56230 api_server.go:253] Checking apiserver healthz at https://192.168.50.68:8444/healthz ...
	I1219 03:54:51.622349   56230 api_server.go:279] https://192.168.50.68:8444/healthz returned 200:
	ok
	I1219 03:54:51.628913   56230 api_server.go:141] control plane version: v1.34.3
	I1219 03:54:51.628947   56230 api_server.go:131] duration metric: took 4.511785965s to wait for apiserver health ...
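
The healthz wait above is a plain polling loop against https://192.168.50.68:8444/healthz: anonymous requests first get 403 until the RBAC bootstrap roles exist, then 500 while poststart hooks finish, and finally 200 with the body "ok". A minimal Go sketch of such a loop; TLS verification is skipped because the sketch does not load the cluster CA that signed the apiserver certificate, and the endpoint is the one from the log.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.50.68:8444/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body))
			if resp.StatusCode == http.StatusOK {
				return // body is "ok" once the control plane is fully up
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}
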
	I1219 03:54:51.628957   56230 cni.go:84] Creating CNI manager for ""
	I1219 03:54:51.628965   56230 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1219 03:54:51.630494   56230 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1219 03:54:51.631426   56230 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1219 03:54:51.647385   56230 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
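
The 496-byte payload written to /etc/cni/net.d/1-k8s.conflist is not printed in the log; below is only an illustrative example of the general shape a bridge CNI conflist takes (bridge plugin plus portmap, host-local IPAM over the pod CIDR 10.244.0.0/16 configured earlier), written to a scratch path by a small Go program. The field values are assumptions, not minikube's exact output.

package main

import (
	"fmt"
	"os"
)

func main() {
	// Illustrative bridge conflist; the real file is generated by minikube.
	conflist := `{
  "cniVersion": "1.0.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16",
        "routes": [{"dst": "0.0.0.0/0"}]
      }
    },
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}
`
	if err := os.WriteFile("/tmp/1-k8s.conflist", []byte(conflist), 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
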
	I1219 03:54:51.669320   56230 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:54:51.675232   56230 system_pods.go:59] 8 kube-system pods found
	I1219 03:54:51.675273   56230 system_pods.go:61] "coredns-66bc5c9577-dnfcc" [b8ee66a7-b129-4499-aad8-a988ecea241c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:54:51.675288   56230 system_pods.go:61] "etcd-default-k8s-diff-port-168174" [af7e1768-95b9-431e-877a-63138a91dcdc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:54:51.675298   56230 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-168174" [a59f79b0-b99b-4092-8b4a-773ff0a04569] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:54:51.675318   56230 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-168174" [405651c9-cdc7-414f-a512-f11b7b580eb8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:54:51.675328   56230 system_pods.go:61] "kube-proxy-zs4wg" [2212782c-32ab-4355-8dda-9117953b0223] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:54:51.675338   56230 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-168174" [a524e706-e546-4aa4-848c-f52eb802ffb0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:54:51.675347   56230 system_pods.go:61] "metrics-server-746fcd58dc-xjkbx" [c6e2f2b2-7b94-4ff2-85ba-e79d72b30655] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:54:51.675358   56230 system_pods.go:61] "storage-provisioner" [ec1d1888-a950-48d5-9b73-440e7556818b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:54:51.675366   56230 system_pods.go:74] duration metric: took 6.023523ms to wait for pod list to return data ...
	I1219 03:54:51.675387   56230 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:54:51.680456   56230 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1219 03:54:51.680483   56230 node_conditions.go:123] node cpu capacity is 2
	I1219 03:54:51.680500   56230 node_conditions.go:105] duration metric: took 5.106096ms to run NodePressure ...
	I1219 03:54:51.680558   56230 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:54:51.941503   56230 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1219 03:54:51.945528   56230 kubeadm.go:744] kubelet initialised
	I1219 03:54:51.945566   56230 kubeadm.go:745] duration metric: took 4.028139ms waiting for restarted kubelet to initialise ...
	I1219 03:54:51.945597   56230 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1219 03:54:51.967660   56230 ops.go:34] apiserver oom_adj: -16
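
The oom_adj probe above confirms the kubelet has marked the apiserver as a poor OOM-kill candidate (-16 here). A tiny Go sketch of the same check, resolving the pid with pgrep and reading the proc file, mirroring the shell pipeline in the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Same check as the log: locate the kube-apiserver pid, then read its oom_adj.
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "kube-apiserver not running:", err)
		return
	}
	pids := strings.Fields(string(out))
	if len(pids) == 0 {
		fmt.Fprintln(os.Stderr, "no pid found")
		return
	}
	adj, err := os.ReadFile("/proc/" + pids[0] + "/oom_adj")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Printf("apiserver oom_adj: %s", adj) // -16 in the log above
}
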
	I1219 03:54:51.967680   56230 kubeadm.go:602] duration metric: took 9.210474475s to restartPrimaryControlPlane
	I1219 03:54:51.967689   56230 kubeadm.go:403] duration metric: took 9.255411647s to StartCluster
	I1219 03:54:51.967705   56230 settings.go:142] acquiring lock: {Name:mkb131693a40b9aa50c302192272518c3561c861 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:54:51.967787   56230 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-5010/kubeconfig
	I1219 03:54:51.970216   56230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5010/kubeconfig: {Name:mk464ac013e036429e360ed78fc04687ed75f83a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:54:51.970558   56230 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.50.68 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1219 03:54:51.970693   56230 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1219 03:54:51.970789   56230 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-168174"
	I1219 03:54:51.970812   56230 config.go:182] Loaded profile config "default-k8s-diff-port-168174": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:54:51.970826   56230 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-168174"
	I1219 03:54:51.970825   56230 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-168174"
	I1219 03:54:51.970846   56230 addons.go:70] Setting metrics-server=true in profile "default-k8s-diff-port-168174"
	I1219 03:54:51.970858   56230 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-168174"
	I1219 03:54:51.970858   56230 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-168174"
	I1219 03:54:51.970884   56230 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-168174"
	W1219 03:54:51.970893   56230 addons.go:248] addon dashboard should already be in state true
	I1219 03:54:51.970919   56230 host.go:66] Checking if "default-k8s-diff-port-168174" exists ...
	W1219 03:54:51.970836   56230 addons.go:248] addon storage-provisioner should already be in state true
	I1219 03:54:51.970978   56230 host.go:66] Checking if "default-k8s-diff-port-168174" exists ...
	I1219 03:54:51.970861   56230 addons.go:239] Setting addon metrics-server=true in "default-k8s-diff-port-168174"
	W1219 03:54:51.971035   56230 addons.go:248] addon metrics-server should already be in state true
	I1219 03:54:51.971057   56230 host.go:66] Checking if "default-k8s-diff-port-168174" exists ...
	I1219 03:54:51.971960   56230 out.go:179] * Verifying Kubernetes components...
	I1219 03:54:51.973008   56230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:54:51.974650   56230 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:54:51.974726   56230 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
	I1219 03:54:51.974952   56230 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1219 03:54:51.975006   56230 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 03:54:48.502712   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:49.001711   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:49.501592   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:50.001601   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:50.501313   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:51.002296   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:51.502360   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:52.002651   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:52.503108   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:53.002165   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:51.975433   56230 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-168174"
	W1219 03:54:51.975454   56230 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:54:51.975493   56230 host.go:66] Checking if "default-k8s-diff-port-168174" exists ...
	I1219 03:54:51.975992   56230 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1219 03:54:51.976010   56230 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1219 03:54:51.976037   56230 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:54:51.976049   56230 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:54:51.978029   56230 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:54:51.978047   56230 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:54:51.979030   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:51.979580   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:51.979617   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:51.979992   56230 sshutil.go:53] new ssh client: &{IP:192.168.50.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/default-k8s-diff-port-168174/id_rsa Username:docker}
	I1219 03:54:51.980624   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:51.980627   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:51.981054   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:51.981088   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:51.981091   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:51.981123   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:51.981299   56230 sshutil.go:53] new ssh client: &{IP:192.168.50.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/default-k8s-diff-port-168174/id_rsa Username:docker}
	I1219 03:54:51.981430   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:51.981442   56230 sshutil.go:53] new ssh client: &{IP:192.168.50.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/default-k8s-diff-port-168174/id_rsa Username:docker}
	I1219 03:54:51.981908   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:51.981931   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:51.982118   56230 sshutil.go:53] new ssh client: &{IP:192.168.50.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/default-k8s-diff-port-168174/id_rsa Username:docker}
	I1219 03:54:52.329267   56230 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:54:52.362110   56230 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-168174" to be "Ready" ...
	I1219 03:54:52.365712   56230 node_ready.go:49] node "default-k8s-diff-port-168174" is "Ready"
	I1219 03:54:52.365740   56230 node_ready.go:38] duration metric: took 3.595186ms for node "default-k8s-diff-port-168174" to be "Ready" ...
	I1219 03:54:52.365758   56230 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:54:52.365821   56230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:54:52.390728   56230 api_server.go:72] duration metric: took 420.108978ms to wait for apiserver process to appear ...
	I1219 03:54:52.390759   56230 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:54:52.390781   56230 api_server.go:253] Checking apiserver healthz at https://192.168.50.68:8444/healthz ...
	I1219 03:54:52.397481   56230 api_server.go:279] https://192.168.50.68:8444/healthz returned 200:
	ok
	I1219 03:54:52.398595   56230 api_server.go:141] control plane version: v1.34.3
	I1219 03:54:52.398619   56230 api_server.go:131] duration metric: took 7.851716ms to wait for apiserver health ...
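	(For context: the health wait above is just an HTTPS GET against the apiserver's /healthz endpoint on the profile's port, 8444 here. A minimal sketch of reproducing that probe by hand, assuming the IP/port from the log and skipping TLS verification, which the real client does not do since it trusts the cluster CA:

	    # Illustrative only: mirrors the healthz probe logged above.
	    # IP and port are taken from the log lines; -k skips cert verification.
	    curl -k https://192.168.50.68:8444/healthz
	    # a healthy apiserver responds with:
	    # ok
	)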
	I1219 03:54:52.398634   56230 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:54:52.403556   56230 system_pods.go:59] 8 kube-system pods found
	I1219 03:54:52.403621   56230 system_pods.go:61] "coredns-66bc5c9577-dnfcc" [b8ee66a7-b129-4499-aad8-a988ecea241c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:54:52.403638   56230 system_pods.go:61] "etcd-default-k8s-diff-port-168174" [af7e1768-95b9-431e-877a-63138a91dcdc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:54:52.403653   56230 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-168174" [a59f79b0-b99b-4092-8b4a-773ff0a04569] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:54:52.403664   56230 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-168174" [405651c9-cdc7-414f-a512-f11b7b580eb8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:54:52.403676   56230 system_pods.go:61] "kube-proxy-zs4wg" [2212782c-32ab-4355-8dda-9117953b0223] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:54:52.403690   56230 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-168174" [a524e706-e546-4aa4-848c-f52eb802ffb0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:54:52.403705   56230 system_pods.go:61] "metrics-server-746fcd58dc-xjkbx" [c6e2f2b2-7b94-4ff2-85ba-e79d72b30655] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:54:52.403714   56230 system_pods.go:61] "storage-provisioner" [ec1d1888-a950-48d5-9b73-440e7556818b] Running
	I1219 03:54:52.403725   56230 system_pods.go:74] duration metric: took 5.080532ms to wait for pod list to return data ...
	I1219 03:54:52.403737   56230 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:54:52.406964   56230 default_sa.go:45] found service account: "default"
	I1219 03:54:52.406989   56230 default_sa.go:55] duration metric: took 3.241415ms for default service account to be created ...
	I1219 03:54:52.406999   56230 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:54:52.412763   56230 system_pods.go:86] 8 kube-system pods found
	I1219 03:54:52.412787   56230 system_pods.go:89] "coredns-66bc5c9577-dnfcc" [b8ee66a7-b129-4499-aad8-a988ecea241c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:54:52.412797   56230 system_pods.go:89] "etcd-default-k8s-diff-port-168174" [af7e1768-95b9-431e-877a-63138a91dcdc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:54:52.412804   56230 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-168174" [a59f79b0-b99b-4092-8b4a-773ff0a04569] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:54:52.412810   56230 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-168174" [405651c9-cdc7-414f-a512-f11b7b580eb8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:54:52.412816   56230 system_pods.go:89] "kube-proxy-zs4wg" [2212782c-32ab-4355-8dda-9117953b0223] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:54:52.412821   56230 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-168174" [a524e706-e546-4aa4-848c-f52eb802ffb0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:54:52.412826   56230 system_pods.go:89] "metrics-server-746fcd58dc-xjkbx" [c6e2f2b2-7b94-4ff2-85ba-e79d72b30655] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:54:52.412830   56230 system_pods.go:89] "storage-provisioner" [ec1d1888-a950-48d5-9b73-440e7556818b] Running
	I1219 03:54:52.412837   56230 system_pods.go:126] duration metric: took 5.832618ms to wait for k8s-apps to be running ...
	I1219 03:54:52.412847   56230 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:54:52.412890   56230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:54:52.437131   56230 system_svc.go:56] duration metric: took 24.267658ms WaitForService to wait for kubelet
	I1219 03:54:52.437166   56230 kubeadm.go:587] duration metric: took 466.551246ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:54:52.437188   56230 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:54:52.440753   56230 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1219 03:54:52.440776   56230 node_conditions.go:123] node cpu capacity is 2
	I1219 03:54:52.440789   56230 node_conditions.go:105] duration metric: took 3.595658ms to run NodePressure ...
	I1219 03:54:52.440804   56230 start.go:242] waiting for startup goroutines ...
	I1219 03:54:52.571235   56230 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:54:52.579720   56230 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:54:52.588696   56230 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	I1219 03:54:52.607999   56230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:54:52.623079   56230 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1219 03:54:52.623103   56230 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1219 03:54:52.632201   56230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:54:52.689775   56230 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1219 03:54:52.689802   56230 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1219 03:54:52.755241   56230 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:54:52.755280   56230 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1219 03:54:52.860818   56230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:54:51.531836   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:52.032945   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:52.532771   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:53.031681   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:53.532510   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:54.032369   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:54.532915   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:55.031905   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:55.531152   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:56.032011   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:53.502165   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:54.002813   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:54.501582   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:55.002986   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:55.501711   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:56.000984   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:56.502399   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:57.002200   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:57.502369   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:58.002000   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:54.655285   56230 ssh_runner.go:235] Completed: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh": (2.066552827s)
	I1219 03:54:54.655390   56230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	I1219 03:54:54.655405   56230 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.047371795s)
	I1219 03:54:54.655528   56230 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.023298979s)
	I1219 03:54:54.655657   56230 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.794802456s)
	I1219 03:54:54.655684   56230 addons.go:500] Verifying addon metrics-server=true in "default-k8s-diff-port-168174"
	I1219 03:54:57.969258   56230 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (3.313828747s)
	I1219 03:54:57.969346   56230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
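	(For reference, the dashboard addon bring-up logged above reduces to three shell steps: fetch helm via the upstream get-helm-3 script, run helm upgrade --install against the kubernetes-dashboard chart repo, then apply the admin manifest. A hedged sketch of the same sequence; the kubeconfig path, chart options, and manifest path are copied verbatim from the log lines, everything else is illustrative and not minikube's actual source:

	    # Illustrative reconstruction of the commands shown in the log above.
	    export KUBECONFIG=/var/lib/minikube/kubeconfig

	    # Install helm via the official convenience script (as logged).
	    curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
	    chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh

	    # Install the dashboard chart with the same options shown in the log.
	    helm upgrade --install kubernetes-dashboard kubernetes-dashboard \
	      --create-namespace --repo https://kubernetes.github.io/dashboard/ \
	      --namespace kubernetes-dashboard \
	      --set nginx.enabled=false --set cert-manager.enabled=false \
	      --set metrics-server.enabled=false --set kong.proxy.type=NodePort

	    # Apply the admin service-account manifest (path as logged).
	    kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	)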
	I1219 03:54:58.498709   56230 addons.go:500] Verifying addon dashboard=true in "default-k8s-diff-port-168174"
	I1219 03:54:58.501734   56230 out.go:179] * Verifying dashboard addon...
	I1219 03:54:58.504348   56230 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:54:58.510036   56230 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:54:58.510056   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:59.010436   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:56.532022   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:57.031585   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:57.531985   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:58.032925   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:58.533378   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:59.032504   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:59.530653   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:00.031045   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:00.531549   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:01.030879   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:58.502926   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:59.001807   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:59.501672   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:00.001239   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:00.501991   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:01.001622   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:01.501692   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:02.002517   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:02.502331   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:03.001757   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:59.508121   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:00.008244   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:00.507884   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:01.008223   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:01.507861   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:02.012677   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:02.507898   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:03.008121   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:03.508367   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:04.008842   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:01.531235   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:02.031845   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:02.531542   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:03.030822   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:03.532087   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:04.032140   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:04.532095   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:05.032183   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:05.532546   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:06.031699   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:03.501262   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:04.001782   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:04.501640   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:05.002705   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:05.501849   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:06.001647   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:06.502225   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:07.002170   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:07.502397   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:08.003244   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:04.508346   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:05.007493   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:05.507987   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:06.007825   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:06.507635   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:07.008062   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:07.507047   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:08.008442   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:08.510089   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:09.008180   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:06.536198   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:07.032221   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:07.532227   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:08.032198   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:08.531813   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:09.031889   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:09.531666   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:10.031122   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:10.532149   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:11.031983   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:08.502642   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:09.001743   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:09.502066   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:10.002044   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:10.502017   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:11.002386   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:11.502467   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:12.002107   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:12.502677   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:13.002161   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:09.507112   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:10.008461   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:10.508312   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:11.008611   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:11.508384   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:12.008280   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:12.508541   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:13.008623   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:13.508431   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:14.009349   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:11.532619   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:12.031875   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:12.532589   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:13.031244   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:13.531877   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:14.031690   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:14.531758   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:15.032196   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:15.531803   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:16.030943   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:13.502018   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:14.002044   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:14.502072   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:15.002330   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:15.502958   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:16.001850   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:16.501605   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:17.001853   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:17.501780   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:18.001784   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:14.508124   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:15.008333   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:15.508545   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:16.008130   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:16.507985   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:17.007539   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:17.508141   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:18.007919   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:18.507523   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:19.008842   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:16.532418   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:17.032219   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:17.532547   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:18.032233   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:18.532551   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:19.033166   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:19.531532   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:20.031971   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:20.532050   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:21.032787   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:18.501956   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:19.002220   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:19.502226   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:20.003355   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:20.501800   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:21.001708   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:21.501127   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:22.003195   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:22.502775   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:23.002072   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:19.507432   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:20.008746   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:20.508268   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:21.008770   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:21.508749   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:22.009746   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:22.509595   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:23.008351   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:23.508700   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:24.009427   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:21.532398   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:22.033297   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:22.531966   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:23.032953   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:23.532813   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:24.032632   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:24.531743   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:25.031446   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:25.531999   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:26.032229   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:23.502200   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:24.002490   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:24.502281   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:25.002814   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:25.502200   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:26.001250   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:26.502303   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:27.003201   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:27.501897   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:28.001740   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:24.508429   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:25.008390   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:25.507941   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:26.007624   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:26.508269   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:27.008250   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:27.508598   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:28.008371   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:28.508380   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:29.008493   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:26.531979   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:27.031753   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:27.531087   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:28.031427   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:28.533856   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:29.032558   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:29.532153   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:30.031923   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:30.531960   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:31.032601   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:28.501692   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:29.001922   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:29.501325   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:30.003828   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:30.502896   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:31.002912   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:31.501760   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:32.001551   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:32.503707   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:33.002109   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:29.508499   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:30.009212   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:30.508512   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:31.008223   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:31.508681   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:32.008636   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:32.508533   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:33.008248   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:33.507749   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:34.010179   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:31.531439   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:32.033650   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:32.532006   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:33.033362   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:33.532163   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:34.032485   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:34.532885   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:35.032795   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:35.531588   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:36.032179   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:33.502338   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:34.001955   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:34.502091   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:35.002307   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:35.502793   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:36.000849   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:36.501606   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:37.001431   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:37.502037   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:38.001873   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:34.507899   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:35.009735   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:35.508708   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:36.008927   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:36.508321   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:37.008289   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:37.507348   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:38.009029   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:38.507232   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:39.007368   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:36.532210   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:37.032304   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:37.531955   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	... [process 55595 repeats this same check roughly every 500ms; the pod is still Pending at 03:56:53.531778]
	I1219 03:55:38.501770   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	... [process 55957 repeats this same check roughly every 500ms; the pod is still Pending at 03:56:53.001409]
	I1219 03:55:39.508096   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	... [process 56230 repeats this same check roughly every 500ms; the pod is still Pending at 03:56:54.008970]
	I1219 03:56:54.031643   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:54.531467   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:55.031815   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:55.531155   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:56.031720   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:53.503475   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:54.001639   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:54.501436   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:55.002712   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:55.501897   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:56.001181   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:56.501530   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:57.000985   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:57.501730   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:58.001514   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:54.508263   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:55.007505   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:55.508726   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:56.008230   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:56.508664   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:57.008997   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:57.507428   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:58.008379   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:58.508549   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:59.007995   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:56.531536   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:57.032617   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:57.535990   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:58.031685   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:58.533156   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:59.031587   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:59.532830   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:00.031367   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:00.532930   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:01.031943   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:58.502386   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:59.002215   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:59.503037   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:00.001428   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:00.502319   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:01.002311   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:01.502140   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:02.002283   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:02.502150   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:03.002240   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:59.507946   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:00.008416   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:00.508631   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:01.008561   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:01.508912   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:02.008658   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:02.509386   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:03.008665   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:03.509011   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:04.008072   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:01.533032   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:02.032143   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:02.531588   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:03.032371   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:03.533496   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:04.033023   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:04.531133   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:05.032394   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:05.532243   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:06.031898   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:03.502405   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:04.001877   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:04.505174   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:05.002029   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:05.502125   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:06.001824   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:06.501660   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:07.002257   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:07.502497   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:08.002911   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:04.509042   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:05.008740   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:05.507989   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:06.007873   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:06.508285   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:07.007091   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:07.508238   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:08.008238   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:08.508597   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:09.009516   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:06.531381   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:07.032162   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:07.531649   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:08.032718   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:08.532156   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:09.033496   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:09.533930   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:10.032368   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:10.532625   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:11.032661   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:08.501952   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:09.001604   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:09.501905   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:10.002311   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:10.501777   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:11.001546   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:11.502154   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:12.002455   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:12.503055   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:13.001472   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:09.508050   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:10.008080   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:10.507970   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:11.007844   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:11.508056   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:12.007765   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:12.508456   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:13.007981   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:13.508855   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:14.008604   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:11.532081   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:12.032702   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:12.531078   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:13.031663   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:13.531993   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:14.033077   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:14.531457   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:15.032927   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:15.531699   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:16.031008   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:13.502839   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:14.001682   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:14.501484   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:15.003428   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:15.502649   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:16.002047   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:16.501936   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:17.001951   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:17.502955   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:18.002709   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:14.509628   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:15.008629   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:15.509037   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:16.008098   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:16.508408   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:17.009392   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:17.507832   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:18.008540   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:18.509468   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:19.008988   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:16.532091   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:17.032487   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:17.532767   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:18.032348   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:18.533265   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:19.032832   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:19.533225   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:20.032480   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:20.531859   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:21.031535   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:18.502389   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:19.002611   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:19.501620   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:20.002058   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:20.502778   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:21.002073   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:21.501287   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:22.001492   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:22.503034   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:23.002058   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:19.507218   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:20.008007   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:20.507903   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:21.008002   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:21.508538   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:22.009106   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:22.509031   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:23.008019   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:23.508250   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:24.009604   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:21.532463   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:22.032668   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:22.531757   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:23.031273   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:23.533278   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:24.032950   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:24.531375   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:25.032433   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:25.532764   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:26.031941   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:23.501829   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:24.001397   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:24.502802   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:25.001851   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:25.503206   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:26.001481   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:26.502653   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:27.002180   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:27.501887   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:28.001927   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:24.509024   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:25.007589   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:25.509073   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:26.008555   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:26.508449   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:27.008256   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:27.508501   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:28.009916   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:28.508490   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:29.008336   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:26.531904   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:27.031168   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:27.532025   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:28.032276   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:28.531968   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:29.032764   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:29.531973   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:30.031624   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:30.532201   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:31.032129   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:28.502278   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:29.001507   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:29.501338   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:30.002753   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:30.501620   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:31.001545   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:31.502545   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:32.001650   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:32.501704   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:33.001060   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:29.508006   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:30.007837   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:30.509358   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:31.008238   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:31.508132   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:32.007983   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:32.508981   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:33.007803   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:33.507769   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:34.009970   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:31.532685   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:32.033023   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:32.531348   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:33.031614   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:33.533370   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:34.032795   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:34.531237   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:35.032033   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:35.532778   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:36.031294   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:33.502337   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:34.002204   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:34.501845   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:35.002344   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:35.503195   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:36.002894   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:36.501979   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:37.002008   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:37.501981   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:38.001740   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:34.507806   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:35.009357   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:35.508695   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:36.008959   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:36.509725   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:37.008245   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:37.507606   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:38.008218   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:38.507870   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:39.007087   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:36.532257   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:37.032024   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:37.532220   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:38.031647   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:38.532123   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:39.032889   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:39.532444   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:40.032621   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:40.532943   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:41.031712   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:38.501355   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:39.002083   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:39.501469   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:40.002554   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:40.501408   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:41.002216   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:41.502543   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:42.001754   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:42.501454   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:43.002870   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:39.507033   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:40.007862   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:40.509097   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:41.008460   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:41.509108   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:42.007794   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:42.508514   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:43.009784   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:43.508154   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:44.008565   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:41.531552   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:42.032724   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:42.531968   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:43.031728   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:43.531786   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:44.032471   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:44.531802   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:45.032162   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:45.532320   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:46.031297   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:43.503203   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:44.002682   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:44.502066   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:45.001775   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:45.501403   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:46.002298   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:46.502073   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:47.001483   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:47.501639   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:48.002266   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:44.508296   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:45.008881   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:45.508078   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:46.007871   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:46.508564   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:47.008609   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:47.507625   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:48.008815   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:48.507996   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:49.009033   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:46.531916   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:47.032003   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:47.535669   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:48.032260   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:48.533368   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:49.032732   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:49.532734   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:50.031076   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:50.531706   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:51.031411   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:48.502350   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:49.002202   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:49.502113   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:50.002611   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:50.501323   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:51.002251   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:51.501726   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:52.003470   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:52.502490   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:53.002224   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:49.507379   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:50.007665   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:50.508534   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:51.009007   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:51.509344   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:52.007746   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:52.508532   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:53.009346   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:53.507367   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:54.009828   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:51.532471   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:52.032182   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:52.531696   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:53.031891   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:53.531523   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:54.032527   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:54.531544   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:55.033055   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:55.532251   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:56.032012   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:53.501701   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:54.001815   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:54.501403   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:55.001721   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:55.502408   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:56.006350   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:56.502718   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:57.000975   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:57.502050   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:58.001993   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:54.507665   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:55.010022   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:55.507891   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:56.017962   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:56.509387   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:57.009499   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:57.508592   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:58.007712   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:58.509159   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:59.008024   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:56.532417   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:57.032702   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:57.531960   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:58.032030   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:58.532438   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:59.032562   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:59.532541   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:00.031906   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:00.533707   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:01.031481   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:58.501333   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:59.002706   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:59.501390   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:00.003083   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:00.501477   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:01.003243   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:01.502051   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:02.002119   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:02.502250   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:03.001892   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:59.508467   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:00.007934   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:00.508461   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:01.009263   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:01.508676   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:02.007597   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:02.508263   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:03.008661   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:03.508545   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:04.008653   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:01.533009   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:02.032493   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:02.532027   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:03.032116   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:03.531261   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:04.034181   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:04.531702   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:05.032409   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:05.533808   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:06.031246   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:03.501444   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:04.002084   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:04.501717   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:05.002397   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:05.502329   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:06.002161   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:06.501724   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:07.001096   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:07.501676   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:08.001373   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:04.508793   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:05.009558   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:05.508307   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:06.008745   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:06.508478   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:07.009365   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:07.508285   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:08.008394   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:08.507659   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:09.008883   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:06.531671   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:07.032663   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:07.532654   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:08.032443   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:08.531860   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:09.031786   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:09.532418   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:10.032031   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:10.531026   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:11.031184   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:08.502311   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:09.001761   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:09.501921   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:10.001779   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:10.502884   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:11.000815   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:11.502204   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:12.002552   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:12.502487   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:13.002005   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:09.509248   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:10.008315   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:10.507712   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:11.009764   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:11.509368   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:12.007428   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:12.508548   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:13.008954   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:13.508930   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:14.008936   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:11.532311   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:12.032156   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:12.531768   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:13.031259   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:13.532112   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:14.032440   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:14.533083   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:15.031470   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:15.533077   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:16.031626   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:13.503116   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:14.002138   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:14.502257   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:15.002721   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:15.501511   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:16.002183   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:16.502306   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:17.002714   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:17.501224   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:18.003247   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:14.508715   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:15.008752   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:15.509114   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:16.007677   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:16.508804   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:17.009618   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:17.508120   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:18.007885   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:18.507480   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:19.008978   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:16.532146   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:17.031615   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:17.532552   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:18.031381   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:18.532386   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:19.032461   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:19.533200   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:20.032375   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:20.531718   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:21.030828   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:18.502028   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:19.001762   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:19.501418   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:20.002914   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:20.501869   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:21.001896   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:21.501339   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:22.002565   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:22.502667   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:23.001134   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:19.507828   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:20.008203   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:20.508364   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:21.008929   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:21.508117   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:22.007662   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:22.507899   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:23.008710   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:23.507212   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:24.008474   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:21.532845   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:22.032290   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:22.532646   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:23.031957   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:23.531378   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:24.032264   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:24.531928   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:25.031473   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:25.532386   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:26.032382   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:23.502231   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:24.002752   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:24.500970   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:25.000924   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:25.501030   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:26.002189   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:26.502781   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:27.002623   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:27.501117   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:28.001792   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:24.508109   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:25.008892   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:25.508228   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:26.007643   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:26.508278   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:27.009399   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:27.508216   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:28.008474   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:28.507952   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:29.008596   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:26.532465   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:27.032800   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:27.531643   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:28.031616   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:28.533745   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:29.031460   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:29.532616   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:30.032471   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:30.532228   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:31.031437   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:28.501355   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:29.001764   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:29.501298   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:30.003052   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:30.502950   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:31.001770   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:31.501738   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:32.003204   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:32.503749   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:33.000964   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:29.508615   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:30.009187   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:30.507594   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:31.009258   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:31.508166   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:32.008876   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:32.508828   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:33.009323   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:33.507635   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:34.008857   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:31.532499   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:32.033303   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:32.532140   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:33.031451   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:33.532012   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:34.031739   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:34.531969   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:35.031026   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:35.531884   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:36.032850   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:33.501466   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:34.002962   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:34.501319   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:35.002095   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:35.501455   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:36.002904   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:36.502079   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:37.002351   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:37.502139   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:38.002366   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:34.507536   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:35.009458   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:35.508342   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:36.008114   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:36.507689   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:37.008772   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:37.508175   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:38.008253   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:38.508521   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:39.010486   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:36.531019   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:37.032116   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:37.531731   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:38.031746   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:38.531610   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:39.032124   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:39.531488   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:40.032358   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:40.532561   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:41.032192   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:38.502021   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:39.001431   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:39.502831   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:40.001874   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:40.501461   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:41.002135   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:41.502101   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:42.002403   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:42.501826   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:43.001388   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:39.508693   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:40.008934   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:40.507098   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:41.007956   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:41.508938   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:42.007971   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:42.508613   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:43.009088   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:43.507422   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:44.008448   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:41.531909   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:42.031872   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:42.532556   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:43.032306   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:43.532154   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:44.032667   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:44.531742   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:45.032077   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:45.531946   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:46.033451   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:43.502067   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:44.002320   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:44.501957   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:45.002135   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:45.501241   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:46.002784   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:46.502988   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:47.004826   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:47.502313   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:48.002638   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:44.507745   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:45.009163   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:45.508092   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:46.008607   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:46.508116   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:47.008371   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:47.507434   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:48.008847   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:48.507621   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:49.008655   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:46.532124   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:47.032109   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:47.531627   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:48.031388   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:48.532769   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:49.031521   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:49.531483   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:50.032091   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:50.532187   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:51.031753   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:48.502460   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:49.002540   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:49.501945   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:50.002223   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:50.501542   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:51.001659   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:51.501286   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:52.002482   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:52.502722   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:53.001266   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:49.507988   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:50.009496   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:50.509180   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:51.008698   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:51.508772   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:52.008904   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:52.508816   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:53.009066   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:53.507818   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:54.008395   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:51.531785   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:52.031722   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:52.531144   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:53.031857   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:53.531058   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:54.032168   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:54.532777   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:55.032608   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:55.531658   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:56.032994   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:53.501701   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:54.002308   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:54.502069   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:55.002072   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:55.501731   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:56.002148   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:56.503078   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:57.003123   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:57.501899   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:58.002103   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:54.507702   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:55.009409   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:55.508752   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:56.009166   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:56.508117   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:57.009342   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:57.508229   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:58.007650   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:58.514151   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:59.008149   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:56.531183   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:57.030952   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:57.531064   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:58.032714   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:58.532410   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:59.031666   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:59.531454   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:00.032368   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:00.532161   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:01.031779   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:58.502176   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:59.001419   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:59.502257   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:00.002485   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:00.501904   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:01.001645   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:01.501262   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:02.002789   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:02.502720   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:03.001933   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:59.507580   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:00.008671   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:00.508761   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:01.009888   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:01.508049   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:02.009018   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:02.508299   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:03.009024   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:03.507584   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:04.008065   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:01.530966   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:02.031880   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:02.531265   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:03.031652   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:03.532860   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:04.031804   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:04.532296   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:05.031908   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:05.531566   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:06.031333   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:03.501384   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:04.002328   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:04.501432   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:05.002402   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:05.502445   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:06.004922   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:06.501916   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:07.002619   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:07.501038   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:08.001821   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:04.507960   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:05.008882   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:05.508735   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:06.009370   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:06.508266   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:07.009541   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:07.508167   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:08.008293   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:08.509228   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:09.008514   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:06.531404   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:07.032313   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:07.532704   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:08.033420   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:08.532159   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:09.032178   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:09.531613   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:10.035741   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:10.532501   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:11.033104   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:08.502173   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:09.002026   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:09.501239   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:10.001300   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:10.503227   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:11.001826   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:11.501434   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:12.003235   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:12.502432   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:13.002356   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:09.507884   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:10.008334   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:10.508168   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:11.008274   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:11.508025   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:12.008228   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:12.507713   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:13.008537   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:13.508684   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:14.009919   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:11.532599   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:12.035420   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:12.531992   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:13.031944   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:13.531194   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:14.032224   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:14.531672   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:15.031544   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:15.531967   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:16.031448   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:13.501782   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:14.001444   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:14.503454   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:15.002767   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:15.501906   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:16.001726   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:16.502123   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:17.005942   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:17.501817   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:18.001941   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:14.507853   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:15.008476   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:15.508667   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:16.008722   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:16.509046   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:17.008778   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:17.508906   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:18.008492   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:18.508647   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:19.007815   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:16.532200   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:17.031966   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:17.531791   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:18.033536   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:18.532652   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:19.032201   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:19.531928   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:20.033359   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:20.533670   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:21.032187   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:18.501934   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:19.002902   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:19.501267   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:20.002601   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:20.501489   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:21.002545   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:21.501360   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:22.002042   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:22.503032   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:23.001085   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:19.507611   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:20.008196   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:20.509732   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:21.009055   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:21.508388   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:22.007995   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:22.507537   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:23.008854   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:23.508167   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:24.008019   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:21.531647   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:22.034444   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:22.532628   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:23.032333   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:23.531736   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:24.032056   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:24.531916   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:25.031464   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:25.532198   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:26.032089   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:23.501603   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:24.001216   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:24.502879   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:25.001292   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:25.501341   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:26.002410   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:26.502804   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:27.002021   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:27.502279   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:28.002340   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:24.507566   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:25.008774   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:25.509162   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:26.009209   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:26.507648   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:27.009824   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:27.508631   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:28.009013   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:28.507653   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:29.008464   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:26.531694   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:27.032157   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:27.532431   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:28.031890   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:28.533074   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:29.032602   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:29.531803   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:30.032839   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:30.531064   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:31.033390   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:28.502372   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:29.001862   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:29.502294   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:30.001477   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:30.503184   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:31.002218   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:31.502643   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:32.001877   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:32.503311   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:33.002436   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:29.508296   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:30.008304   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:30.508381   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:31.008490   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:31.508829   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:32.007834   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:32.508400   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:33.008794   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:33.509376   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:34.008146   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:31.531920   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:32.033659   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:32.532892   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:33.031391   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:33.532537   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:34.033029   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:34.530956   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:35.031585   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:35.533148   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:36.031532   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:33.502341   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:34.002087   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:34.501994   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:35.001651   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:35.501441   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:36.002140   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:36.501765   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:37.001241   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:37.502479   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:38.002437   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:34.508235   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:35.008483   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:35.508534   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:36.008744   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:36.508702   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:37.008924   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:37.507985   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:38.007421   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:38.507911   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:39.008590   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:36.532045   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:37.031418   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:37.532867   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:38.031333   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:38.532360   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:39.032704   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:39.531535   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:40.033276   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:40.532090   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:41.032674   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:38.503195   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:39.001544   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:39.501650   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:40.001446   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:40.503141   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:41.001293   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:41.501933   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:42.001485   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:42.501393   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:43.001793   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:39.508830   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:40.008286   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:40.508322   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:41.008679   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:41.509263   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:42.008010   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:42.507661   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:43.008954   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:43.508712   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:44.008648   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:41.531115   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:42.033681   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:42.532204   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:43.031525   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:43.532706   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:44.031154   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:44.531400   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:45.032686   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:45.531016   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:46.031694   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:43.500799   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:44.001437   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:44.503087   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:45.001262   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:45.502070   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:46.001597   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:46.501748   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:47.000952   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:47.503068   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:48.002924   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:44.508721   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:45.009360   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:45.507561   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:46.008024   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:46.509438   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:47.008003   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:47.509182   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:48.007694   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:48.509204   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:49.008075   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:46.531475   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:47.032236   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:47.531623   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:48.032627   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:48.531328   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:49.032263   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:49.531544   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:50.031759   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:50.531968   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:51.031169   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:48.502523   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:49.001089   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:49.502166   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:50.002297   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:50.501900   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:51.002177   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:51.501990   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:52.001761   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:52.503411   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:53.001888   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:49.508661   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:50.008645   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:50.509700   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:51.008238   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:51.509485   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:52.008711   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:52.508528   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:53.009157   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:53.508329   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:54.008371   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:51.532470   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:52.033506   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:52.532332   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:53.032618   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:53.532408   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:54.032700   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:54.532680   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:55.030763   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:55.531486   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:56.032694   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:53.501870   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:54.001255   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:54.502146   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:55.001892   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:55.502373   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:56.001923   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:56.502476   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:57.001982   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:57.502446   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:58.003222   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:54.508346   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:55.008513   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:55.509470   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:56.009002   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:56.508067   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:57.007514   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:57.508798   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:58.008828   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:58.508496   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:59.008238   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:56.531146   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:57.031591   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:57.532375   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:58.033082   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:58.531649   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:59.031902   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:59.532588   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:00.032880   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:00.532136   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:01.028606   55595 kapi.go:81] temporary error: getting Pods with label selector "app.kubernetes.io/name=kubernetes-dashboard-web" : [client rate limiter Wait returned an error: context deadline exceeded]
	I1219 04:00:01.028642   55595 kapi.go:107] duration metric: took 6m0.000598506s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	W1219 04:00:01.028754   55595 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [waiting for app.kubernetes.io/name=kubernetes-dashboard-web pods: context deadline exceeded]
	I1219 04:00:01.030295   55595 out.go:179] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I1219 04:00:01.031288   55595 addons.go:546] duration metric: took 6m6.695311639s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
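The kapi.go lines above show the addon waiter polling the API server roughly every 500ms for pods matching "app.kubernetes.io/name=kubernetes-dashboard-web" until its 6m0s context deadline expires, after which the 'dashboard' enable is reported as failed while the other addons remain enabled. Below is a minimal client-go sketch of that same poll-until-Ready-or-deadline pattern, not minikube's actual kapi.go implementation; the kubeconfig path, the kubernetes-dashboard namespace, and the helper names are assumptions for illustration only.

    // Minimal sketch of "poll pods by label until Ready or deadline".
    // Assumptions (not taken from the report): default kubeconfig location,
    // namespace "kubernetes-dashboard", helper names waitForPodsReady/allReady.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForPodsReady lists pods matching selector in ns every interval and
    // returns once all of them report the Ready condition, or an error when
    // ctx expires (mirroring the 6m0s deadline seen in the log above).
    func waitForPodsReady(ctx context.Context, cs *kubernetes.Clientset, ns, selector string, interval time.Duration) error {
        ticker := time.NewTicker(interval)
        defer ticker.Stop()
        for {
            pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
            if err == nil && len(pods.Items) > 0 && allReady(pods.Items) {
                return nil
            }
            select {
            case <-ctx.Done():
                // This is the point where a real waiter would surface a
                // "context deadline exceeded" style error, as in the log.
                return fmt.Errorf("waiting for %s pods: %w", selector, ctx.Err())
            case <-ticker.C:
            }
        }
    }

    // allReady reports whether every pod carries PodReady=True.
    func allReady(pods []corev1.Pod) bool {
        for _, p := range pods {
            ready := false
            for _, c := range p.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    ready = true
                }
            }
            if !ready {
                return false
            }
        }
        return true
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        if err := waitForPodsReady(ctx, cs, "kubernetes-dashboard", "app.kubernetes.io/name=kubernetes-dashboard-web", 500*time.Millisecond); err != nil {
            fmt.Println("error:", err)
        }
    }

When the deadline fires, the List call and the ctx.Err() propagation produce the same class of failure recorded at 04:00:01 above ("client rate limiter Wait returned an error: context deadline exceeded"), which the addon callback then reports as the 'dashboard' warning.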
	I1219 04:00:01.031318   55595 start.go:247] waiting for cluster config update ...
	I1219 04:00:01.031329   55595 start.go:256] writing updated cluster config ...
	I1219 04:00:01.031596   55595 ssh_runner.go:195] Run: rm -f paused
	I1219 04:00:01.039401   55595 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 04:00:01.043907   55595 pod_ready.go:83] waiting for pod "coredns-7d764666f9-s7729" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:01.050711   55595 pod_ready.go:94] pod "coredns-7d764666f9-s7729" is "Ready"
	I1219 04:00:01.050733   55595 pod_ready.go:86] duration metric: took 6.803187ms for pod "coredns-7d764666f9-s7729" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:01.053765   55595 pod_ready.go:83] waiting for pod "etcd-no-preload-298059" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:01.058213   55595 pod_ready.go:94] pod "etcd-no-preload-298059" is "Ready"
	I1219 04:00:01.058234   55595 pod_ready.go:86] duration metric: took 4.447718ms for pod "etcd-no-preload-298059" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:01.060300   55595 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-298059" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:01.065142   55595 pod_ready.go:94] pod "kube-apiserver-no-preload-298059" is "Ready"
	I1219 04:00:01.065166   55595 pod_ready.go:86] duration metric: took 4.840116ms for pod "kube-apiserver-no-preload-298059" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:01.067284   55595 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-298059" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:01.445171   55595 pod_ready.go:94] pod "kube-controller-manager-no-preload-298059" is "Ready"
	I1219 04:00:01.445200   55595 pod_ready.go:86] duration metric: took 377.900542ms for pod "kube-controller-manager-no-preload-298059" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:01.645417   55595 pod_ready.go:83] waiting for pod "kube-proxy-mdfxl" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:02.044330   55595 pod_ready.go:94] pod "kube-proxy-mdfxl" is "Ready"
	I1219 04:00:02.044377   55595 pod_ready.go:86] duration metric: took 398.907218ms for pod "kube-proxy-mdfxl" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:02.245766   55595 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-298059" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:02.645879   55595 pod_ready.go:94] pod "kube-scheduler-no-preload-298059" is "Ready"
	I1219 04:00:02.645937   55595 pod_ready.go:86] duration metric: took 400.143888ms for pod "kube-scheduler-no-preload-298059" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:02.645954   55595 pod_ready.go:40] duration metric: took 1.606522986s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 04:00:02.697158   55595 start.go:625] kubectl: 1.35.0, cluster: 1.35.0-rc.1 (minor skew: 0)
	I1219 04:00:02.698980   55595 out.go:179] * Done! kubectl is now configured to use "no-preload-298059" cluster and "default" namespace by default
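After the addon phase, the same run performs an extra wait (up to 4m0s) for the core "kube-system" pods, matched by the component/k8s-app labels listed above, to be "Ready" or be gone; here all of them pass within about 1.6s. A roughly equivalent manual check uses kubectl wait, shown below for the kube-dns label as an example; this assumes the profile name "no-preload-298059" is also the kubectl context (as elsewhere in this report) and it does not capture the "or be gone" fallback:

    kubectl --context no-preload-298059 -n kube-system wait pod \
      -l k8s-app=kube-dns --for=condition=Ready --timeout=4m0s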
	I1219 03:59:58.502543   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:59.001139   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:59.501649   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:00.001415   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:00.502374   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:01.002272   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:01.502072   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:02.002694   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:02.501377   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:03.002499   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:59.508999   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:00.009465   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:00.508462   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:01.008805   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:01.509068   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:02.007682   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:02.508807   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:03.009533   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:03.509171   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:04.008344   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:03.501482   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:04.002080   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:04.502514   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:05.002257   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:05.502741   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:06.001565   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:06.502968   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:07.002364   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:07.502630   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:08.001239   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:04.508168   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:05.007952   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:05.508714   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:06.008805   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:06.508239   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:07.009278   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:07.509811   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:08.008945   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:08.513267   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:09.008127   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:08.502641   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:09.002630   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:09.501272   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:10.001592   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:10.502177   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:11.002030   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:11.501972   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:12.001917   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:12.502061   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:13.001824   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:09.508106   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:10.007937   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:10.507970   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:11.008418   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:11.508614   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:12.007994   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:12.508452   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:13.008632   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:13.510343   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:14.008029   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:13.501559   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:14.000819   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:14.501592   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:15.002062   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:15.501962   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:16.001720   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:16.502083   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:17.002024   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:17.501681   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:18.001502   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:14.507866   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:15.009254   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:15.508704   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:16.008650   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:16.508846   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:17.010798   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:17.507933   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:18.009073   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:18.508337   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:19.008331   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:18.502462   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:19.003975   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:19.501373   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:20.002075   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:20.502437   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:21.001953   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:21.501417   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:22.003083   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:22.501515   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:23.001553   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:19.509712   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:20.008196   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:20.507361   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:21.008284   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:21.508302   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:22.007728   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:22.509259   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:23.008711   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:23.509664   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:24.008507   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:23.502079   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:24.001986   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:24.501922   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:25.001179   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:25.502972   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:26.002165   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:26.502809   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:27.001369   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:27.502083   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:28.002507   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:24.508264   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:25.008006   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:25.509488   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:26.008519   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:26.508978   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:27.008309   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:27.508775   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:28.009625   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:28.508731   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:29.009043   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:28.502787   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:29.001831   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:29.502430   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:29.998860   55957 kapi.go:81] temporary error: getting Pods with label selector "app.kubernetes.io/name=kubernetes-dashboard-web" : [client rate limiter Wait returned an error: context deadline exceeded]
	I1219 04:00:29.998886   55957 kapi.go:107] duration metric: took 6m0.000824832s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	W1219 04:00:29.998960   55957 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [waiting for app.kubernetes.io/name=kubernetes-dashboard-web pods: context deadline exceeded]
	I1219 04:00:30.000498   55957 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1219 04:00:30.001513   55957 addons.go:546] duration metric: took 6m7.141140342s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1219 04:00:30.001540   55957 start.go:247] waiting for cluster config update ...
	I1219 04:00:30.001550   55957 start.go:256] writing updated cluster config ...
	I1219 04:00:30.001800   55957 ssh_runner.go:195] Run: rm -f paused
	I1219 04:00:30.010656   55957 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 04:00:30.015390   55957 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9ptrv" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:30.020029   55957 pod_ready.go:94] pod "coredns-66bc5c9577-9ptrv" is "Ready"
	I1219 04:00:30.020051   55957 pod_ready.go:86] duration metric: took 4.638733ms for pod "coredns-66bc5c9577-9ptrv" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:30.022246   55957 pod_ready.go:83] waiting for pod "etcd-embed-certs-244717" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:30.026208   55957 pod_ready.go:94] pod "etcd-embed-certs-244717" is "Ready"
	I1219 04:00:30.026224   55957 pod_ready.go:86] duration metric: took 3.954396ms for pod "etcd-embed-certs-244717" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:30.028026   55957 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-244717" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:30.033934   55957 pod_ready.go:94] pod "kube-apiserver-embed-certs-244717" is "Ready"
	I1219 04:00:30.033951   55957 pod_ready.go:86] duration metric: took 5.905842ms for pod "kube-apiserver-embed-certs-244717" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:30.036019   55957 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-244717" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:30.417680   55957 pod_ready.go:94] pod "kube-controller-manager-embed-certs-244717" is "Ready"
	I1219 04:00:30.417709   55957 pod_ready.go:86] duration metric: took 381.673199ms for pod "kube-controller-manager-embed-certs-244717" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:30.616122   55957 pod_ready.go:83] waiting for pod "kube-proxy-p8gvm" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:31.015548   55957 pod_ready.go:94] pod "kube-proxy-p8gvm" is "Ready"
	I1219 04:00:31.015585   55957 pod_ready.go:86] duration metric: took 399.442531ms for pod "kube-proxy-p8gvm" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:31.216107   55957 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-244717" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:31.615784   55957 pod_ready.go:94] pod "kube-scheduler-embed-certs-244717" is "Ready"
	I1219 04:00:31.615816   55957 pod_ready.go:86] duration metric: took 399.682179ms for pod "kube-scheduler-embed-certs-244717" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:31.615832   55957 pod_ready.go:40] duration metric: took 1.605153664s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 04:00:31.662639   55957 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 04:00:31.664208   55957 out.go:179] * Done! kubectl is now configured to use "embed-certs-244717" cluster and "default" namespace by default
	I1219 04:00:29.508455   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:30.007925   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:30.507876   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:31.007766   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:31.509691   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:32.008321   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:32.509128   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:33.008511   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:33.509110   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:34.008834   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:34.508661   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:35.009145   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:35.510268   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:36.007810   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:36.508457   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:37.008511   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:37.508340   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:38.008906   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:38.508226   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:39.007515   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:39.508398   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:40.008048   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:40.507411   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:41.008044   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:41.509491   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:42.008720   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:42.508893   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:43.008890   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:43.507746   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:44.008735   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:44.508515   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:45.008316   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:45.508925   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:46.007410   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:46.507809   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:47.007816   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:47.507934   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:48.008317   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:48.511438   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:49.008355   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:49.508479   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:50.008867   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:50.507492   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:51.008220   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:51.508283   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:52.008800   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:52.508617   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:53.008464   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:53.508878   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:54.008198   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:54.509007   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:55.007919   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:55.507118   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:56.008201   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:56.507989   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:57.007872   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:57.508142   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:58.008008   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:58.504601   56230 kapi.go:81] temporary error: getting Pods with label selector "app.kubernetes.io/name=kubernetes-dashboard-web" : [client rate limiter Wait returned an error: context deadline exceeded]
	I1219 04:00:58.504633   56230 kapi.go:107] duration metric: took 6m0.000289249s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	W1219 04:00:58.504722   56230 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [waiting for app.kubernetes.io/name=kubernetes-dashboard-web pods: context deadline exceeded]
	I1219 04:00:58.506261   56230 out.go:179] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I1219 04:00:58.507432   56230 addons.go:546] duration metric: took 6m6.536744168s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I1219 04:00:58.507471   56230 start.go:247] waiting for cluster config update ...
	I1219 04:00:58.507487   56230 start.go:256] writing updated cluster config ...
	I1219 04:00:58.507818   56230 ssh_runner.go:195] Run: rm -f paused
	I1219 04:00:58.516094   56230 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 04:00:58.521203   56230 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dnfcc" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:58.526011   56230 pod_ready.go:94] pod "coredns-66bc5c9577-dnfcc" is "Ready"
	I1219 04:00:58.526035   56230 pod_ready.go:86] duration metric: took 4.809568ms for pod "coredns-66bc5c9577-dnfcc" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:58.528592   56230 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-168174" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:58.534102   56230 pod_ready.go:94] pod "etcd-default-k8s-diff-port-168174" is "Ready"
	I1219 04:00:58.534119   56230 pod_ready.go:86] duration metric: took 5.507213ms for pod "etcd-default-k8s-diff-port-168174" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:58.536078   56230 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-168174" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:58.540931   56230 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-168174" is "Ready"
	I1219 04:00:58.540951   56230 pod_ready.go:86] duration metric: took 4.854792ms for pod "kube-apiserver-default-k8s-diff-port-168174" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:58.542905   56230 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-168174" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:58.920520   56230 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-168174" is "Ready"
	I1219 04:00:58.920546   56230 pod_ready.go:86] duration metric: took 377.623833ms for pod "kube-controller-manager-default-k8s-diff-port-168174" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:59.120738   56230 pod_ready.go:83] waiting for pod "kube-proxy-zs4wg" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:59.520222   56230 pod_ready.go:94] pod "kube-proxy-zs4wg" is "Ready"
	I1219 04:00:59.520254   56230 pod_ready.go:86] duration metric: took 399.487462ms for pod "kube-proxy-zs4wg" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:59.721383   56230 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-168174" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:01:00.120982   56230 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-168174" is "Ready"
	I1219 04:01:00.121009   56230 pod_ready.go:86] duration metric: took 399.598924ms for pod "kube-scheduler-default-k8s-diff-port-168174" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:01:00.121020   56230 pod_ready.go:40] duration metric: took 1.604899766s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 04:01:00.167943   56230 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 04:01:00.169437   56230 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-168174" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 19 04:09:32 embed-certs-244717 crio[890]: time="2025-12-19 04:09:32.911032635Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6df5f0cd-daa8-4469-8230-f2babe21f475 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 04:09:32 embed-certs-244717 crio[890]: time="2025-12-19 04:09:32.911559721Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766117372911528618,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:136931,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6df5f0cd-daa8-4469-8230-f2babe21f475 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 04:09:32 embed-certs-244717 crio[890]: time="2025-12-19 04:09:32.912805388Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c869aa34-2948-438a-b393-106d60af5905 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:09:32 embed-certs-244717 crio[890]: time="2025-12-19 04:09:32.913076772Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c869aa34-2948-438a-b393-106d60af5905 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:09:32 embed-certs-244717 crio[890]: time="2025-12-19 04:09:32.913406273Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7f7f1d6992811a3f809cabb4d69d174028bcee28fdbabaface79c4622f2b40bd,PodSandboxId:fd26055d1fc31c3bdf0430cb1b11d62182c2db3fbaaa27195abc721cdbb036d1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766116493237678965,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99ff9c60-2f30-457a-8cb5-e030eb64a58e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e2c0887c51c89e0e584dc73353e522ffc4668546f521d1bb624d09f93ffb862,PodSandboxId:6cef58f979bdcf009b1664773464c0ad2edaf1a7e687c8d62dfee39025f1247f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1766116469616408924,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5641715a-fb85-45c8-b1e2-de3c394086ed,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4411653e4250dfe0da57038d14bc69022addd7ec37037fc8a8955e1655974a27,PodSandboxId:0614affd1728ae851d74f833d7d34176fd6ce8e8688ac8c4f9ee867519f3c2ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1766116466143726495,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-9ptrv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22226444-faa6-420d-a862-1ef0441a80e4,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:954447f0c968092e3dfb23f85b41f94836775c9f1bc23d1e84e6a62e0c4f6749,PodSandboxId:ed4e137eed69f1576ef7b849ef51524cd10889219acb0222c0ad3319f593ed6b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1766116462507695033,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p8gvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 283607b2-9e6c-44f4-9c9d-7d713c71fb8c,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffc03b0b75719a3766a04010c15c50585e5b9d66412e39a40f987b194e653af8,PodSandboxId:fd26055d1fc31c3bdf0430cb1b11d62182c2db3fbaaa27195abc721cdbb036d1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766116462409761046,Labels:map[string]string{io.kubernetes.container.name: storage-provi
sioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99ff9c60-2f30-457a-8cb5-e030eb64a58e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5c00fb043f114c6d4ca21d53aa260dcba5ec8b805a2caba68166dab5cad8572,PodSandboxId:4c181a507e6b05a84a1966aebd53b5eeb271f50ebc1db5d936520d1e47685492,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766116457998299600,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manag
er,io.kubernetes.pod.name: kube-controller-manager-embed-certs-244717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bed550c60240cd3e16a8090bdf714aad,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e133fc618150f001c2cb9f53d2de00fb776e3bddc7a2b3672a3542025045aa71,PodSandboxId:d3374678022d82759234913a106fbe11b79862f9660c0389e5a1ae91fccd69ce,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTA
INER_RUNNING,CreatedAt:1766116457981525503,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-244717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51fc709cdbf261d7f78621b653d0027b,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e68b6704fdf3c1afa48de9036c1b5beeb789d7c238d995602002285db55f9c7,PodSandboxId:cb79701937629a720cbd58b60479edfa60b769186b690e52241d588128c3a052,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler
:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766116457963694981,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-244717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3849b25e9ef521e7689e47039ae86b1a,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1d9289f2c9d67ab5f9296e09735ae1579ea00b277c7b6055c4c99d35345cd39,PodSandboxId:a4a72d74a0d7973721c035b2f02eb51fb3a397ecb63af2ca599f833e8e931d40,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa27095f5619377172f3
d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1766116457957635365,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-244717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42497a262dfe4f576d621089344401ac,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c869aa34-2948-438a-b393-106d60af5905 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:09:32 embed-certs-244717 crio[890]: time="2025-12-19 04:09:32.935773576Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5ed96659-ab86-4326-82a1-32463086e738 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 19 04:09:32 embed-certs-244717 crio[890]: time="2025-12-19 04:09:32.936089977Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:1a1db854745cdc3e34fd3bbc3ef18539f9fcd32b32e7edfcb77db507876edf83,Metadata:&PodSandboxMetadata{Name:kubernetes-dashboard-kong-9849c64bd-ghhd7,Uid:051c3643-370d-478b-a0d6-5012d03a4d3e,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766116470019661776,Labels:map[string]string{app: kubernetes-dashboard-kong,app.kubernetes.io/component: app,app.kubernetes.io/instance: kubernetes-dashboard,app.kubernetes.io/managed-by: Helm,app.kubernetes.io/name: kong,app.kubernetes.io/version: 3.9,helm.sh/chart: kong-2.52.0,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kubernetes-dashboard-kong-9849c64bd-ghhd7,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 051c3643-370d-478b-a0d6-5012d03a4d3e,pod-template-hash: 9849c64bd,version: 3.9,},Annotations:map[string]string{kubernetes.io/config.seen:
2025-12-19T03:54:29.644094696Z,kubernetes.io/config.source: api,kuma.io/gateway: enabled,kuma.io/service-account-token-volume: kubernetes-dashboard-kong-token,traffic.sidecar.istio.io/includeInboundPorts: ,},RuntimeHandler:,},&PodSandbox{Id:2eb3e6089abbf93ca12d005ae2ef05931a11614ff922972ef8a48253d1847c1b,Metadata:&PodSandboxMetadata{Name:kubernetes-dashboard-auth-6b55998857-99nts,Uid:be79c314-fcbc-410f-a245-ca04752aeb23,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766116470016641511,Labels:map[string]string{app.kubernetes.io/component: auth,app.kubernetes.io/instance: kubernetes-dashboard,app.kubernetes.io/managed-by: Helm,app.kubernetes.io/name: kubernetes-dashboard-auth,app.kubernetes.io/part-of: kubernetes-dashboard,app.kubernetes.io/version: 1.4.0,helm.sh/chart: kubernetes-dashboard-7.14.0,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kubernetes-dashboard-auth-6b55998857-99nts,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: be79c314-fcbc-4
10f-a245-ca04752aeb23,pod-template-hash: 6b55998857,},Annotations:map[string]string{checksum/config: ed9eece39e9fe218fa5fb9bf2428a78dc19b578c344e94d7b6271706ba6fd4ae,kubernetes.io/config.seen: 2025-12-19T03:54:29.599499609Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:924f40151977e5bfb1ccaff03f56e971844d9c175c836bb342aba5ea2f11b035,Metadata:&PodSandboxMetadata{Name:kubernetes-dashboard-api-677b969f5d-xr86s,Uid:2468eb14-0ebb-45fd-abf4-63a8e1309258,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766116469969038932,Labels:map[string]string{app.kubernetes.io/component: api,app.kubernetes.io/instance: kubernetes-dashboard,app.kubernetes.io/managed-by: Helm,app.kubernetes.io/name: kubernetes-dashboard-api,app.kubernetes.io/part-of: kubernetes-dashboard,app.kubernetes.io/version: 1.14.0,helm.sh/chart: kubernetes-dashboard-7.14.0,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kubernetes-dashboard-api-677b969f5d-xr86s,io.kubernetes.pod.namespace: kubernetes-d
ashboard,io.kubernetes.pod.uid: 2468eb14-0ebb-45fd-abf4-63a8e1309258,pod-template-hash: 677b969f5d,},Annotations:map[string]string{checksum/config: e55e0dd787e7da9854c0366ab3f9b6db13be0ca8f29de374e28a6752c7f2ec0f,kubernetes.io/config.seen: 2025-12-19T03:54:29.589669443Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2a40ff7ab2ef0ef653d2a04f502a3d4c85b02e047ef168058243a49cb1d8ff72,Metadata:&PodSandboxMetadata{Name:kubernetes-dashboard-web-5c9f966b98-7jhl7,Uid:9e539d4c-644f-4905-a4e7-222f6b6aa324,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766116469953769312,Labels:map[string]string{app.kubernetes.io/component: web,app.kubernetes.io/instance: kubernetes-dashboard,app.kubernetes.io/managed-by: Helm,app.kubernetes.io/name: kubernetes-dashboard-web,app.kubernetes.io/part-of: kubernetes-dashboard,app.kubernetes.io/version: 1.7.0,helm.sh/chart: kubernetes-dashboard-7.14.0,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kubernetes-dashboard-web-5c9f966b98-7
jhl7,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 9e539d4c-644f-4905-a4e7-222f6b6aa324,pod-template-hash: 5c9f966b98,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-19T03:54:29.596715170Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9a612b72097dccd9b84781240ca83758eb2f02f534e7c3334471dba5eda2b275,Metadata:&PodSandboxMetadata{Name:kubernetes-dashboard-metrics-scraper-7685fd8b77-gpdxz,Uid:aed4238c-131b-42c2-8c9f-f75f42efd32a,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766116469952054264,Labels:map[string]string{app.kubernetes.io/component: metrics-scraper,app.kubernetes.io/instance: kubernetes-dashboard,app.kubernetes.io/managed-by: Helm,app.kubernetes.io/name: kubernetes-dashboard-metrics-scraper,app.kubernetes.io/part-of: kubernetes-dashboard,app.kubernetes.io/version: 1.2.2,helm.sh/chart: kubernetes-dashboard-7.14.0,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kubernetes-dashboard-metrics-scraper-
7685fd8b77-gpdxz,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: aed4238c-131b-42c2-8c9f-f75f42efd32a,pod-template-hash: 7685fd8b77,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-19T03:54:29.592785386Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6cef58f979bdcf009b1664773464c0ad2edaf1a7e687c8d62dfee39025f1247f,Metadata:&PodSandboxMetadata{Name:busybox,Uid:5641715a-fb85-45c8-b1e2-de3c394086ed,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766116465761431845,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5641715a-fb85-45c8-b1e2-de3c394086ed,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-19T03:54:21.838819746Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0614affd1728ae851d74f833d7d34176fd6ce8e8688ac8c4f9ee867519f3c2ea,Metadata:&PodSandboxMetadata{Name:coredns-66
bc5c9577-9ptrv,Uid:22226444-faa6-420d-a862-1ef0441a80e4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766116465759432629,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-9ptrv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22226444-faa6-420d-a862-1ef0441a80e4,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-19T03:54:21.838824523Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b2bb22ff667ff94ba5c3e7035762c398a5997125a1fd465d4c53b461ca2bd240,Metadata:&PodSandboxMetadata{Name:metrics-server-746fcd58dc-x74d4,Uid:e4a33dcc-d3b6-45a0-92d7-6cfbc5df35b2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766116463963186082,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-746fcd58dc-x74d4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4a33dcc-d3b6-45a0-92d7-6cfbc5df35
b2,k8s-app: metrics-server,pod-template-hash: 746fcd58dc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-19T03:54:21.838834019Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fd26055d1fc31c3bdf0430cb1b11d62182c2db3fbaaa27195abc721cdbb036d1,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:99ff9c60-2f30-457a-8cb5-e030eb64a58e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766116462168175506,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99ff9c60-2f30-457a-8cb5-e030eb64a58e,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storag
e-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-12-19T03:54:21.838831720Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ed4e137eed69f1576ef7b849ef51524cd10889219acb0222c0ad3319f593ed6b,Metadata:&PodSandboxMetadata{Name:kube-proxy-p8gvm,Uid:283607b2-9e6c-44f4-9c9d-7d713c71fb8c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766116462167314974,Labels:map[string]string{controller-revision-hash: 55c7cb7b75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-p8gvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 283607b2-9e6c-44f4-9c9d-7d713c71fb8c,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-19T03:54:21.838829247Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d3374678022d82759234913a106fbe11b79862f9660c0389e5a1ae91fccd69ce,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-244717,Uid:51fc709cdbf261d7f78621b653d0027b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766116457717042837,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-244717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51fc709cdbf261d7f78621b653d0027b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.83.54:2379,kubernetes.io/config.hash: 51fc709cdbf261d7f78621b653d0027b,kubernetes.io/config.seen: 2025-12-19T03:54:16.875659211Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandb
ox{Id:cb79701937629a720cbd58b60479edfa60b769186b690e52241d588128c3a052,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-244717,Uid:3849b25e9ef521e7689e47039ae86b1a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766116457711988331,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-244717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3849b25e9ef521e7689e47039ae86b1a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3849b25e9ef521e7689e47039ae86b1a,kubernetes.io/config.seen: 2025-12-19T03:54:16.842671617Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a4a72d74a0d7973721c035b2f02eb51fb3a397ecb63af2ca599f833e8e931d40,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-244717,Uid:42497a262dfe4f576d621089344401ac,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766116457692186043,Labels:map[string]string{compon
ent: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-244717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42497a262dfe4f576d621089344401ac,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.83.54:8443,kubernetes.io/config.hash: 42497a262dfe4f576d621089344401ac,kubernetes.io/config.seen: 2025-12-19T03:54:16.842648132Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4c181a507e6b05a84a1966aebd53b5eeb271f50ebc1db5d936520d1e47685492,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-244717,Uid:bed550c60240cd3e16a8090bdf714aad,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766116457687141065,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-244717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bed5
50c60240cd3e16a8090bdf714aad,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: bed550c60240cd3e16a8090bdf714aad,kubernetes.io/config.seen: 2025-12-19T03:54:16.842670027Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=5ed96659-ab86-4326-82a1-32463086e738 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 19 04:09:32 embed-certs-244717 crio[890]: time="2025-12-19 04:09:32.937180545Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=46cf69cd-2449-4881-bb0a-66bd1cf51557 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:09:32 embed-certs-244717 crio[890]: time="2025-12-19 04:09:32.937396503Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=46cf69cd-2449-4881-bb0a-66bd1cf51557 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:09:32 embed-certs-244717 crio[890]: time="2025-12-19 04:09:32.937659761Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7f7f1d6992811a3f809cabb4d69d174028bcee28fdbabaface79c4622f2b40bd,PodSandboxId:fd26055d1fc31c3bdf0430cb1b11d62182c2db3fbaaa27195abc721cdbb036d1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766116493237678965,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99ff9c60-2f30-457a-8cb5-e030eb64a58e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e2c0887c51c89e0e584dc73353e522ffc4668546f521d1bb624d09f93ffb862,PodSandboxId:6cef58f979bdcf009b1664773464c0ad2edaf1a7e687c8d62dfee39025f1247f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1766116469616408924,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5641715a-fb85-45c8-b1e2-de3c394086ed,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4411653e4250dfe0da57038d14bc69022addd7ec37037fc8a8955e1655974a27,PodSandboxId:0614affd1728ae851d74f833d7d34176fd6ce8e8688ac8c4f9ee867519f3c2ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1766116466143726495,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-9ptrv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22226444-faa6-420d-a862-1ef0441a80e4,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:954447f0c968092e3dfb23f85b41f94836775c9f1bc23d1e84e6a62e0c4f6749,PodSandboxId:ed4e137eed69f1576ef7b849ef51524cd10889219acb0222c0ad3319f593ed6b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1766116462507695033,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p8gvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 283607b2-9e6c-44f4-9c9d-7d713c71fb8c,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5c00fb043f114c6d4ca21d53aa260dcba5ec8b805a2caba68166dab5cad8572,PodSandboxId:4c181a507e6b05a84a1966aebd53b5eeb271f50ebc1db5d936520d1e47685492,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766116457998299600,Labels:map[string]string{io.kubernetes.container.name: kube-con
troller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-244717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bed550c60240cd3e16a8090bdf714aad,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e133fc618150f001c2cb9f53d2de00fb776e3bddc7a2b3672a3542025045aa71,PodSandboxId:d3374678022d82759234913a106fbe11b79862f9660c0389e5a1ae91fccd69ce,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b
1,State:CONTAINER_RUNNING,CreatedAt:1766116457981525503,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-244717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51fc709cdbf261d7f78621b653d0027b,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e68b6704fdf3c1afa48de9036c1b5beeb789d7c238d995602002285db55f9c7,PodSandboxId:cb79701937629a720cbd58b60479edfa60b769186b690e52241d588128c3a052,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766116457963694981,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-244717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3849b25e9ef521e7689e47039ae86b1a,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1d9289f2c9d67ab5f9296e09735ae1579ea00b277c7b6055c4c99d35345cd39,PodSandboxId:a4a72d74a0d7973721c035b2f02eb51fb3a397ecb63af2ca599f833e8e931d40,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa27095
f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1766116457957635365,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-244717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42497a262dfe4f576d621089344401ac,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=46cf69cd-2449-4881-bb0a-66bd1cf51557 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:09:32 embed-certs-244717 crio[890]: time="2025-12-19 04:09:32.949391335Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=adeb3849-4492-4bb8-ab22-5b8a796c8572 name=/runtime.v1.RuntimeService/Version
	Dec 19 04:09:32 embed-certs-244717 crio[890]: time="2025-12-19 04:09:32.949458612Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=adeb3849-4492-4bb8-ab22-5b8a796c8572 name=/runtime.v1.RuntimeService/Version
	Dec 19 04:09:32 embed-certs-244717 crio[890]: time="2025-12-19 04:09:32.950748349Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4ed94708-1af9-4d57-996e-bd8de48ec233 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 04:09:32 embed-certs-244717 crio[890]: time="2025-12-19 04:09:32.951183563Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766117372951157732,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:136931,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4ed94708-1af9-4d57-996e-bd8de48ec233 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 04:09:32 embed-certs-244717 crio[890]: time="2025-12-19 04:09:32.952290368Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=33dd4e0b-5c17-469b-9f67-1370699c95a2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:09:32 embed-certs-244717 crio[890]: time="2025-12-19 04:09:32.952337797Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=33dd4e0b-5c17-469b-9f67-1370699c95a2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:09:32 embed-certs-244717 crio[890]: time="2025-12-19 04:09:32.952506640Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7f7f1d6992811a3f809cabb4d69d174028bcee28fdbabaface79c4622f2b40bd,PodSandboxId:fd26055d1fc31c3bdf0430cb1b11d62182c2db3fbaaa27195abc721cdbb036d1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766116493237678965,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99ff9c60-2f30-457a-8cb5-e030eb64a58e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e2c0887c51c89e0e584dc73353e522ffc4668546f521d1bb624d09f93ffb862,PodSandboxId:6cef58f979bdcf009b1664773464c0ad2edaf1a7e687c8d62dfee39025f1247f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1766116469616408924,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5641715a-fb85-45c8-b1e2-de3c394086ed,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4411653e4250dfe0da57038d14bc69022addd7ec37037fc8a8955e1655974a27,PodSandboxId:0614affd1728ae851d74f833d7d34176fd6ce8e8688ac8c4f9ee867519f3c2ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1766116466143726495,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-9ptrv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22226444-faa6-420d-a862-1ef0441a80e4,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:954447f0c968092e3dfb23f85b41f94836775c9f1bc23d1e84e6a62e0c4f6749,PodSandboxId:ed4e137eed69f1576ef7b849ef51524cd10889219acb0222c0ad3319f593ed6b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1766116462507695033,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p8gvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 283607b2-9e6c-44f4-9c9d-7d713c71fb8c,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffc03b0b75719a3766a04010c15c50585e5b9d66412e39a40f987b194e653af8,PodSandboxId:fd26055d1fc31c3bdf0430cb1b11d62182c2db3fbaaa27195abc721cdbb036d1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766116462409761046,Labels:map[string]string{io.kubernetes.container.name: storage-provi
sioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99ff9c60-2f30-457a-8cb5-e030eb64a58e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5c00fb043f114c6d4ca21d53aa260dcba5ec8b805a2caba68166dab5cad8572,PodSandboxId:4c181a507e6b05a84a1966aebd53b5eeb271f50ebc1db5d936520d1e47685492,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766116457998299600,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manag
er,io.kubernetes.pod.name: kube-controller-manager-embed-certs-244717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bed550c60240cd3e16a8090bdf714aad,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e133fc618150f001c2cb9f53d2de00fb776e3bddc7a2b3672a3542025045aa71,PodSandboxId:d3374678022d82759234913a106fbe11b79862f9660c0389e5a1ae91fccd69ce,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTA
INER_RUNNING,CreatedAt:1766116457981525503,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-244717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51fc709cdbf261d7f78621b653d0027b,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e68b6704fdf3c1afa48de9036c1b5beeb789d7c238d995602002285db55f9c7,PodSandboxId:cb79701937629a720cbd58b60479edfa60b769186b690e52241d588128c3a052,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler
:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766116457963694981,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-244717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3849b25e9ef521e7689e47039ae86b1a,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1d9289f2c9d67ab5f9296e09735ae1579ea00b277c7b6055c4c99d35345cd39,PodSandboxId:a4a72d74a0d7973721c035b2f02eb51fb3a397ecb63af2ca599f833e8e931d40,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa27095f5619377172f3
d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1766116457957635365,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-244717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42497a262dfe4f576d621089344401ac,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=33dd4e0b-5c17-469b-9f67-1370699c95a2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:09:32 embed-certs-244717 crio[890]: time="2025-12-19 04:09:32.962349818Z" level=debug msg="Too many requests to https://registry-1.docker.io/v2/kubernetesui/dashboard-web/manifests/1.7.0: sleeping for 2.000000 seconds before next attempt" file="docker/docker_client.go:596"
	Dec 19 04:09:32 embed-certs-244717 crio[890]: time="2025-12-19 04:09:32.985736246Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3f152006-7b19-457f-8a9a-6e9044b654b0 name=/runtime.v1.RuntimeService/Version
	Dec 19 04:09:32 embed-certs-244717 crio[890]: time="2025-12-19 04:09:32.986102178Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3f152006-7b19-457f-8a9a-6e9044b654b0 name=/runtime.v1.RuntimeService/Version
	Dec 19 04:09:32 embed-certs-244717 crio[890]: time="2025-12-19 04:09:32.988087273Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a5dee50d-6217-4d9a-9c5d-f9d84e9c64eb name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 04:09:32 embed-certs-244717 crio[890]: time="2025-12-19 04:09:32.988675792Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766117372988620083,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:136931,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a5dee50d-6217-4d9a-9c5d-f9d84e9c64eb name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 04:09:32 embed-certs-244717 crio[890]: time="2025-12-19 04:09:32.989852813Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9584eecc-03bd-42c5-af23-5fc1d56a9eff name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:09:32 embed-certs-244717 crio[890]: time="2025-12-19 04:09:32.989979749Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9584eecc-03bd-42c5-af23-5fc1d56a9eff name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:09:32 embed-certs-244717 crio[890]: time="2025-12-19 04:09:32.990151042Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7f7f1d6992811a3f809cabb4d69d174028bcee28fdbabaface79c4622f2b40bd,PodSandboxId:fd26055d1fc31c3bdf0430cb1b11d62182c2db3fbaaa27195abc721cdbb036d1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766116493237678965,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99ff9c60-2f30-457a-8cb5-e030eb64a58e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e2c0887c51c89e0e584dc73353e522ffc4668546f521d1bb624d09f93ffb862,PodSandboxId:6cef58f979bdcf009b1664773464c0ad2edaf1a7e687c8d62dfee39025f1247f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1766116469616408924,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5641715a-fb85-45c8-b1e2-de3c394086ed,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4411653e4250dfe0da57038d14bc69022addd7ec37037fc8a8955e1655974a27,PodSandboxId:0614affd1728ae851d74f833d7d34176fd6ce8e8688ac8c4f9ee867519f3c2ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1766116466143726495,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-9ptrv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22226444-faa6-420d-a862-1ef0441a80e4,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:954447f0c968092e3dfb23f85b41f94836775c9f1bc23d1e84e6a62e0c4f6749,PodSandboxId:ed4e137eed69f1576ef7b849ef51524cd10889219acb0222c0ad3319f593ed6b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1766116462507695033,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p8gvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 283607b2-9e6c-44f4-9c9d-7d713c71fb8c,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffc03b0b75719a3766a04010c15c50585e5b9d66412e39a40f987b194e653af8,PodSandboxId:fd26055d1fc31c3bdf0430cb1b11d62182c2db3fbaaa27195abc721cdbb036d1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766116462409761046,Labels:map[string]string{io.kubernetes.container.name: storage-provi
sioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99ff9c60-2f30-457a-8cb5-e030eb64a58e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5c00fb043f114c6d4ca21d53aa260dcba5ec8b805a2caba68166dab5cad8572,PodSandboxId:4c181a507e6b05a84a1966aebd53b5eeb271f50ebc1db5d936520d1e47685492,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766116457998299600,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manag
er,io.kubernetes.pod.name: kube-controller-manager-embed-certs-244717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bed550c60240cd3e16a8090bdf714aad,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e133fc618150f001c2cb9f53d2de00fb776e3bddc7a2b3672a3542025045aa71,PodSandboxId:d3374678022d82759234913a106fbe11b79862f9660c0389e5a1ae91fccd69ce,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTA
INER_RUNNING,CreatedAt:1766116457981525503,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-244717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51fc709cdbf261d7f78621b653d0027b,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e68b6704fdf3c1afa48de9036c1b5beeb789d7c238d995602002285db55f9c7,PodSandboxId:cb79701937629a720cbd58b60479edfa60b769186b690e52241d588128c3a052,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler
:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766116457963694981,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-244717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3849b25e9ef521e7689e47039ae86b1a,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1d9289f2c9d67ab5f9296e09735ae1579ea00b277c7b6055c4c99d35345cd39,PodSandboxId:a4a72d74a0d7973721c035b2f02eb51fb3a397ecb63af2ca599f833e8e931d40,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa27095f5619377172f3
d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1766116457957635365,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-244717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42497a262dfe4f576d621089344401ac,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9584eecc-03bd-42c5-af23-5fc1d56a9eff name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	7f7f1d6992811       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      14 minutes ago      Running             storage-provisioner       2                   fd26055d1fc31       storage-provisioner                          kube-system
	5e2c0887c51c8       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   15 minutes ago      Running             busybox                   1                   6cef58f979bdc       busybox                                      default
	4411653e4250d       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      15 minutes ago      Running             coredns                   1                   0614affd1728a       coredns-66bc5c9577-9ptrv                     kube-system
	954447f0c9680       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                      15 minutes ago      Running             kube-proxy                1                   ed4e137eed69f       kube-proxy-p8gvm                             kube-system
	ffc03b0b75719       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      15 minutes ago      Exited              storage-provisioner       1                   fd26055d1fc31       storage-provisioner                          kube-system
	d5c00fb043f11       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                      15 minutes ago      Running             kube-controller-manager   1                   4c181a507e6b0       kube-controller-manager-embed-certs-244717   kube-system
	e133fc618150f       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      15 minutes ago      Running             etcd                      1                   d3374678022d8       etcd-embed-certs-244717                      kube-system
	2e68b6704fdf3       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                      15 minutes ago      Running             kube-scheduler            1                   cb79701937629       kube-scheduler-embed-certs-244717            kube-system
	f1d9289f2c9d6       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                      15 minutes ago      Running             kube-apiserver            1                   a4a72d74a0d79       kube-apiserver-embed-certs-244717            kube-system
	
	
	==> coredns [4411653e4250dfe0da57038d14bc69022addd7ec37037fc8a8955e1655974a27] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ecad3ac8c72227dcf0d7a418ea5051ee155dd74d241a13c4787cc61906568517b5647c8519c78ef2c6b724422ee4b03d6cfb27e9a87140163726e83184faf782
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38615 - 52038 "HINFO IN 3058748700005490112.3296782353744935446. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.032425758s
	
	
	==> describe nodes <==
	Name:               embed-certs-244717
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-244717
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=embed-certs-244717
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T03_51_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 03:51:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-244717
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 04:09:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 04:06:44 +0000   Fri, 19 Dec 2025 03:51:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 04:06:44 +0000   Fri, 19 Dec 2025 03:51:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 04:06:44 +0000   Fri, 19 Dec 2025 03:51:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 04:06:44 +0000   Fri, 19 Dec 2025 03:54:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.54
	  Hostname:    embed-certs-244717
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	System Info:
	  Machine ID:                 a2c78c5e7dae44bfa155fa249ad61e2f
	  System UUID:                a2c78c5e-7dae-44bf-a155-fa249ad61e2f
	  Boot ID:                    f99a3e1d-0ea3-4c69-8edf-039724ce6d90
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 coredns-66bc5c9577-9ptrv                                 100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     17m
	  kube-system                 etcd-embed-certs-244717                                  100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         17m
	  kube-system                 kube-apiserver-embed-certs-244717                        250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-embed-certs-244717               200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-p8gvm                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-embed-certs-244717                        100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 metrics-server-746fcd58dc-x74d4                          100m (5%)     0 (0%)      200Mi (6%)       0 (0%)         17m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kubernetes-dashboard        kubernetes-dashboard-api-677b969f5d-xr86s                100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    15m
	  kubernetes-dashboard        kubernetes-dashboard-auth-6b55998857-99nts               100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    15m
	  kubernetes-dashboard        kubernetes-dashboard-kong-9849c64bd-ghhd7                0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kubernetes-dashboard        kubernetes-dashboard-metrics-scraper-7685fd8b77-gpdxz    100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    15m
	  kubernetes-dashboard        kubernetes-dashboard-web-5c9f966b98-7jhl7                100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests      Limits
	  --------           --------      ------
	  cpu                1250m (62%)   1 (50%)
	  memory             1170Mi (39%)  1770Mi (59%)
	  ephemeral-storage  0 (0%)        0 (0%)
	  hugepages-2Mi      0 (0%)        0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 17m                kube-proxy       
	  Normal   Starting                 15m                kube-proxy       
	  Normal   Starting                 18m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node embed-certs-244717 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node embed-certs-244717 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node embed-certs-244717 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    17m                kubelet          Node embed-certs-244717 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  17m                kubelet          Node embed-certs-244717 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     17m                kubelet          Node embed-certs-244717 status is now: NodeHasSufficientPID
	  Normal   Starting                 17m                kubelet          Starting kubelet.
	  Normal   NodeReady                17m                kubelet          Node embed-certs-244717 status is now: NodeReady
	  Normal   RegisteredNode           17m                node-controller  Node embed-certs-244717 event: Registered Node embed-certs-244717 in Controller
	  Normal   Starting                 15m                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node embed-certs-244717 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node embed-certs-244717 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node embed-certs-244717 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 15m                kubelet          Node embed-certs-244717 has been rebooted, boot id: f99a3e1d-0ea3-4c69-8edf-039724ce6d90
	  Normal   RegisteredNode           15m                node-controller  Node embed-certs-244717 event: Registered Node embed-certs-244717 in Controller
	
	
	==> dmesg <==
	[Dec19 03:53] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[Dec19 03:54] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.005578] (rpcbind)[121]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.701037] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000021] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.115440] kauditd_printk_skb: 88 callbacks suppressed
	[  +5.696306] kauditd_printk_skb: 196 callbacks suppressed
	[  +2.250665] kauditd_printk_skb: 275 callbacks suppressed
	[  +6.334318] kauditd_printk_skb: 203 callbacks suppressed
	[Dec19 03:55] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [e133fc618150f001c2cb9f53d2de00fb776e3bddc7a2b3672a3542025045aa71] <==
	{"level":"warn","ts":"2025-12-19T03:54:20.269990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:54:20.288549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:54:20.296944Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:54:20.314207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:54:20.324286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:54:20.335410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:54:20.428023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:54:54.648552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:54:54.658219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:54:54.674896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:54:54.705739Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:54:54.789793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:54:54.816072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:54:54.826726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:54:54.840338Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:54:54.860416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:54:54.873869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:54:54.889471Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:54:54.912791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59894","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-19T04:04:19.428662Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1080}
	{"level":"info","ts":"2025-12-19T04:04:19.452341Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1080,"took":"23.334454ms","hash":935815186,"current-db-size-bytes":4321280,"current-db-size":"4.3 MB","current-db-size-in-use-bytes":1945600,"current-db-size-in-use":"1.9 MB"}
	{"level":"info","ts":"2025-12-19T04:04:19.452450Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":935815186,"revision":1080,"compact-revision":-1}
	{"level":"info","ts":"2025-12-19T04:09:19.435106Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1440}
	{"level":"info","ts":"2025-12-19T04:09:19.439184Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1440,"took":"3.70438ms","hash":1867608454,"current-db-size-bytes":4321280,"current-db-size":"4.3 MB","current-db-size-in-use-bytes":2797568,"current-db-size-in-use":"2.8 MB"}
	{"level":"info","ts":"2025-12-19T04:09:19.439801Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1867608454,"revision":1440,"compact-revision":1080}
	
	
	==> kernel <==
	 04:09:33 up 15 min,  0 users,  load average: 0.28, 0.35, 0.23
	Linux embed-certs-244717 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Dec 17 12:49:57 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [f1d9289f2c9d67ab5f9296e09735ae1579ea00b277c7b6055c4c99d35345cd39] <==
	E1219 04:05:22.189058       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1219 04:05:22.189115       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 04:07:22.188480       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 04:07:22.188591       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1219 04:07:22.188607       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 04:07:22.189602       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 04:07:22.189638       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1219 04:07:22.189674       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 04:09:21.191074       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 04:09:21.191319       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1219 04:09:22.192395       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 04:09:22.192501       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1219 04:09:22.192516       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 04:09:22.192615       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 04:09:22.192654       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1219 04:09:22.193830       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [d5c00fb043f114c6d4ca21d53aa260dcba5ec8b805a2caba68166dab5cad8572] <==
	I1219 04:03:26.115315       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:03:56.045426       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:03:56.125615       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:04:26.052785       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:04:26.138540       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:04:56.058527       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:04:56.147146       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:05:26.064333       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:05:26.161956       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:05:56.069681       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:05:56.172087       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:06:26.076666       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:06:26.180720       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:06:56.081629       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:06:56.188959       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:07:26.087511       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:07:26.198412       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:07:56.092527       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:07:56.208400       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:08:26.097646       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:08:26.217990       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:08:56.103190       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:08:56.226539       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:09:26.110517       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:09:26.233946       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [954447f0c968092e3dfb23f85b41f94836775c9f1bc23d1e84e6a62e0c4f6749] <==
	I1219 03:54:22.909150       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1219 03:54:23.010209       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1219 03:54:23.010322       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.83.54"]
	E1219 03:54:23.010457       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 03:54:23.198968       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1219 03:54:23.199030       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1219 03:54:23.199071       1 server_linux.go:132] "Using iptables Proxier"
	I1219 03:54:23.287511       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 03:54:23.287871       1 server.go:527] "Version info" version="v1.34.3"
	I1219 03:54:23.287891       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:54:23.319002       1 config.go:309] "Starting node config controller"
	I1219 03:54:23.319040       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 03:54:23.319052       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 03:54:23.328953       1 config.go:200] "Starting service config controller"
	I1219 03:54:23.328987       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 03:54:23.328996       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 03:54:23.329022       1 config.go:106] "Starting endpoint slice config controller"
	I1219 03:54:23.329027       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 03:54:23.329161       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 03:54:23.329185       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 03:54:23.429415       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1219 03:54:23.429426       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [2e68b6704fdf3c1afa48de9036c1b5beeb789d7c238d995602002285db55f9c7] <==
	I1219 03:54:18.845327       1 serving.go:386] Generated self-signed cert in-memory
	W1219 03:54:21.102885       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1219 03:54:21.102919       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1219 03:54:21.102930       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1219 03:54:21.102936       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1219 03:54:21.242281       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1219 03:54:21.242765       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:54:21.247638       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:54:21.247732       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:54:21.248480       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1219 03:54:21.248625       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1219 03:54:21.348549       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 19 04:08:59 embed-certs-244717 kubelet[1246]: E1219 04:08:59.936195    1246 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-web\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": ErrImagePull: reading manifest 1.7.0 in docker.io/kubernetesui/dashboard-web: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-7jhl7" podUID="9e539d4c-644f-4905-a4e7-222f6b6aa324"
	Dec 19 04:09:00 embed-certs-244717 kubelet[1246]: E1219 04:09:00.595723    1246 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest 1.2.2 in docker.io/kubernetesui/dashboard-metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2"
	Dec 19 04:09:00 embed-certs-244717 kubelet[1246]: E1219 04:09:00.595765    1246 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest 1.2.2 in docker.io/kubernetesui/dashboard-metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2"
	Dec 19 04:09:00 embed-certs-244717 kubelet[1246]: E1219 04:09:00.596021    1246 kuberuntime_manager.go:1449] "Unhandled Error" err="container kubernetes-dashboard-metrics-scraper start failed in pod kubernetes-dashboard-metrics-scraper-7685fd8b77-gpdxz_kubernetes-dashboard(aed4238c-131b-42c2-8c9f-f75f42efd32a): ErrImagePull: reading manifest 1.2.2 in docker.io/kubernetesui/dashboard-metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 19 04:09:00 embed-certs-244717 kubelet[1246]: E1219 04:09:00.596093    1246 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-metrics-scraper\" with ErrImagePull: \"reading manifest 1.2.2 in docker.io/kubernetesui/dashboard-metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-gpdxz" podUID="aed4238c-131b-42c2-8c9f-f75f42efd32a"
	Dec 19 04:09:03 embed-certs-244717 kubelet[1246]: E1219 04:09:03.936775    1246 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-x74d4" podUID="e4a33dcc-d3b6-45a0-92d7-6cfbc5df35b2"
	Dec 19 04:09:07 embed-certs-244717 kubelet[1246]: E1219 04:09:07.139136    1246 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766117347138847052  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:136931}  inodes_used:{value:62}}"
	Dec 19 04:09:07 embed-certs-244717 kubelet[1246]: E1219 04:09:07.139184    1246 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766117347138847052  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:136931}  inodes_used:{value:62}}"
	Dec 19 04:09:08 embed-certs-244717 kubelet[1246]: E1219 04:09:08.937200    1246 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"clear-stale-pid\" with ImagePullBackOff: \"Back-off pulling image \\\"kong:3.9\\\": ErrImagePull: reading manifest 3.9 in docker.io/library/kong: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-ghhd7" podUID="051c3643-370d-478b-a0d6-5012d03a4d3e"
	Dec 19 04:09:11 embed-certs-244717 kubelet[1246]: E1219 04:09:11.937014    1246 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-api\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": ErrImagePull: reading manifest 1.14.0 in docker.io/kubernetesui/dashboard-api: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-api-677b969f5d-xr86s" podUID="2468eb14-0ebb-45fd-abf4-63a8e1309258"
	Dec 19 04:09:13 embed-certs-244717 kubelet[1246]: E1219 04:09:13.937196    1246 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-web\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": ErrImagePull: reading manifest 1.7.0 in docker.io/kubernetesui/dashboard-web: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-7jhl7" podUID="9e539d4c-644f-4905-a4e7-222f6b6aa324"
	Dec 19 04:09:13 embed-certs-244717 kubelet[1246]: E1219 04:09:13.938030    1246 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\": ErrImagePull: reading manifest 1.2.2 in docker.io/kubernetesui/dashboard-metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-gpdxz" podUID="aed4238c-131b-42c2-8c9f-f75f42efd32a"
	Dec 19 04:09:15 embed-certs-244717 kubelet[1246]: E1219 04:09:15.936623    1246 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-x74d4" podUID="e4a33dcc-d3b6-45a0-92d7-6cfbc5df35b2"
	Dec 19 04:09:17 embed-certs-244717 kubelet[1246]: E1219 04:09:17.140355    1246 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766117357139960985  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:136931}  inodes_used:{value:62}}"
	Dec 19 04:09:17 embed-certs-244717 kubelet[1246]: E1219 04:09:17.141584    1246 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766117357139960985  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:136931}  inodes_used:{value:62}}"
	Dec 19 04:09:21 embed-certs-244717 kubelet[1246]: E1219 04:09:21.937211    1246 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"clear-stale-pid\" with ImagePullBackOff: \"Back-off pulling image \\\"kong:3.9\\\": ErrImagePull: reading manifest 3.9 in docker.io/library/kong: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-ghhd7" podUID="051c3643-370d-478b-a0d6-5012d03a4d3e"
	Dec 19 04:09:24 embed-certs-244717 kubelet[1246]: E1219 04:09:24.940727    1246 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-api\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": ErrImagePull: reading manifest 1.14.0 in docker.io/kubernetesui/dashboard-api: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-api-677b969f5d-xr86s" podUID="2468eb14-0ebb-45fd-abf4-63a8e1309258"
	Dec 19 04:09:27 embed-certs-244717 kubelet[1246]: E1219 04:09:27.143522    1246 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766117367143172811  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:136931}  inodes_used:{value:62}}"
	Dec 19 04:09:27 embed-certs-244717 kubelet[1246]: E1219 04:09:27.143548    1246 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766117367143172811  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:136931}  inodes_used:{value:62}}"
	Dec 19 04:09:27 embed-certs-244717 kubelet[1246]: E1219 04:09:27.937077    1246 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\": ErrImagePull: reading manifest 1.2.2 in docker.io/kubernetesui/dashboard-metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-gpdxz" podUID="aed4238c-131b-42c2-8c9f-f75f42efd32a"
	Dec 19 04:09:29 embed-certs-244717 kubelet[1246]: E1219 04:09:29.937936    1246 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-x74d4" podUID="e4a33dcc-d3b6-45a0-92d7-6cfbc5df35b2"
	Dec 19 04:09:32 embed-certs-244717 kubelet[1246]: E1219 04:09:32.243948    1246 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest 1.4.0 in docker.io/kubernetesui/dashboard-auth: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard-auth:1.4.0"
	Dec 19 04:09:32 embed-certs-244717 kubelet[1246]: E1219 04:09:32.244003    1246 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest 1.4.0 in docker.io/kubernetesui/dashboard-auth: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard-auth:1.4.0"
	Dec 19 04:09:32 embed-certs-244717 kubelet[1246]: E1219 04:09:32.244328    1246 kuberuntime_manager.go:1449] "Unhandled Error" err="container kubernetes-dashboard-auth start failed in pod kubernetes-dashboard-auth-6b55998857-99nts_kubernetes-dashboard(be79c314-fcbc-410f-a245-ca04752aeb23): ErrImagePull: reading manifest 1.4.0 in docker.io/kubernetesui/dashboard-auth: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 19 04:09:32 embed-certs-244717 kubelet[1246]: E1219 04:09:32.244371    1246 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-auth\" with ErrImagePull: \"reading manifest 1.4.0 in docker.io/kubernetesui/dashboard-auth: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-auth-6b55998857-99nts" podUID="be79c314-fcbc-410f-a245-ca04752aeb23"
	
	
	==> storage-provisioner [7f7f1d6992811a3f809cabb4d69d174028bcee28fdbabaface79c4622f2b40bd] <==
	W1219 04:09:09.085948       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:09:11.089817       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:09:11.095867       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:09:13.099933       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:09:13.105438       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:09:15.109276       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:09:15.114956       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:09:17.118737       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:09:17.123658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:09:19.126801       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:09:19.131387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:09:21.134416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:09:21.142510       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:09:23.145560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:09:23.150488       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:09:25.154630       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:09:25.161037       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:09:27.164765       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:09:27.170578       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:09:29.175129       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:09:29.180083       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:09:31.183858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:09:31.188335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:09:33.194216       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:09:33.206944       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [ffc03b0b75719a3766a04010c15c50585e5b9d66412e39a40f987b194e653af8] <==
	I1219 03:54:22.640920       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1219 03:54:52.653878       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-244717 -n embed-certs-244717
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-244717 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: metrics-server-746fcd58dc-x74d4 kubernetes-dashboard-api-677b969f5d-xr86s kubernetes-dashboard-auth-6b55998857-99nts kubernetes-dashboard-kong-9849c64bd-ghhd7 kubernetes-dashboard-metrics-scraper-7685fd8b77-gpdxz kubernetes-dashboard-web-5c9f966b98-7jhl7
helpers_test.go:283: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context embed-certs-244717 describe pod metrics-server-746fcd58dc-x74d4 kubernetes-dashboard-api-677b969f5d-xr86s kubernetes-dashboard-auth-6b55998857-99nts kubernetes-dashboard-kong-9849c64bd-ghhd7 kubernetes-dashboard-metrics-scraper-7685fd8b77-gpdxz kubernetes-dashboard-web-5c9f966b98-7jhl7
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context embed-certs-244717 describe pod metrics-server-746fcd58dc-x74d4 kubernetes-dashboard-api-677b969f5d-xr86s kubernetes-dashboard-auth-6b55998857-99nts kubernetes-dashboard-kong-9849c64bd-ghhd7 kubernetes-dashboard-metrics-scraper-7685fd8b77-gpdxz kubernetes-dashboard-web-5c9f966b98-7jhl7: exit status 1 (73.118387ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-x74d4" not found
	Error from server (NotFound): pods "kubernetes-dashboard-api-677b969f5d-xr86s" not found
	Error from server (NotFound): pods "kubernetes-dashboard-auth-6b55998857-99nts" not found
	Error from server (NotFound): pods "kubernetes-dashboard-kong-9849c64bd-ghhd7" not found
	Error from server (NotFound): pods "kubernetes-dashboard-metrics-scraper-7685fd8b77-gpdxz" not found
	Error from server (NotFound): pods "kubernetes-dashboard-web-5c9f966b98-7jhl7" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context embed-certs-244717 describe pod metrics-server-746fcd58dc-x74d4 kubernetes-dashboard-api-677b969f5d-xr86s kubernetes-dashboard-auth-6b55998857-99nts kubernetes-dashboard-kong-9849c64bd-ghhd7 kubernetes-dashboard-metrics-scraper-7685fd8b77-gpdxz kubernetes-dashboard-web-5c9f966b98-7jhl7: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.28s)
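Note on the failure above: the kubelet entries in the captured log show every kubernetes-dashboard pod stuck in ImagePullBackOff because unauthenticated Docker Hub pulls hit the "toomanyrequests" rate limit, so no pod matching k8s-app=kubernetes-dashboard became Ready within the 9m0s wait. A minimal manual check against the same context (illustrative only, not part of the test harness) would be:

	# list the dashboard pods and their current state
	kubectl --context embed-certs-244717 get pods -n kubernetes-dashboard -o wide
	# surface the image-pull failure events, newest last
	kubectl --context embed-certs-244717 get events -n kubernetes-dashboard --sort-by=.lastTimestamp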

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.53s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1219 04:01:19.090914    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/bridge-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 04:01:53.473615    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/auto-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 04:02:18.385542    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/kindnet-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 04:02:49.626399    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-168174 -n default-k8s-diff-port-168174
start_stop_delete_test.go:272: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-12-19 04:10:00.666949156 +0000 UTC m=+6303.705128468
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
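The wait that timed out here is a label-selector poll for k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace after the stop/start cycle. An equivalent manual check (a sketch mirroring the test's selector and timeout, not the harness's own wait loop) would be:

	# show whether any dashboard pods exist for the selector the test waits on
	kubectl --context default-k8s-diff-port-168174 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard
	# block up to 9 minutes for readiness, matching the test's 9m0s budget
	kubectl --context default-k8s-diff-port-168174 wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard -n kubernetes-dashboard --timeout=9m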
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-168174 -n default-k8s-diff-port-168174
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-168174 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-168174 logs -n 25: (1.413699811s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────
─────┐
	│ COMMAND │                                                                                                                    ARGS                                                                                                                     │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────
─────┤
	│ ssh     │ -p bridge-542624 sudo cat /etc/containerd/config.toml                                                                                                                                                                                       │ bridge-542624                │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:51 UTC │
	│ ssh     │ -p bridge-542624 sudo containerd config dump                                                                                                                                                                                                │ bridge-542624                │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:51 UTC │
	│ ssh     │ -p bridge-542624 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                         │ bridge-542624                │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:51 UTC │
	│ ssh     │ -p bridge-542624 sudo systemctl cat crio --no-pager                                                                                                                                                                                         │ bridge-542624                │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:51 UTC │
	│ ssh     │ -p bridge-542624 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                               │ bridge-542624                │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:51 UTC │
	│ ssh     │ -p bridge-542624 sudo crio config                                                                                                                                                                                                           │ bridge-542624                │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:51 UTC │
	│ delete  │ -p bridge-542624                                                                                                                                                                                                                            │ bridge-542624                │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:51 UTC │
	│ delete  │ -p disable-driver-mounts-189846                                                                                                                                                                                                             │ disable-driver-mounts-189846 │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:51 UTC │
	│ start   │ -p default-k8s-diff-port-168174 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-168174 │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:52 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-094166 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                │ old-k8s-version-094166       │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:51 UTC │
	│ stop    │ -p old-k8s-version-094166 --alsologtostderr -v=3                                                                                                                                                                                            │ old-k8s-version-094166       │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:53 UTC │
	│ addons  │ enable metrics-server -p no-preload-298059 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                     │ no-preload-298059            │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:51 UTC │
	│ stop    │ -p no-preload-298059 --alsologtostderr -v=3                                                                                                                                                                                                 │ no-preload-298059            │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:53 UTC │
	│ addons  │ enable metrics-server -p embed-certs-244717 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                    │ embed-certs-244717           │ jenkins │ v1.37.0 │ 19 Dec 25 03:52 UTC │ 19 Dec 25 03:52 UTC │
	│ stop    │ -p embed-certs-244717 --alsologtostderr -v=3                                                                                                                                                                                                │ embed-certs-244717           │ jenkins │ v1.37.0 │ 19 Dec 25 03:52 UTC │ 19 Dec 25 03:53 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-168174 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                          │ default-k8s-diff-port-168174 │ jenkins │ v1.37.0 │ 19 Dec 25 03:52 UTC │ 19 Dec 25 03:52 UTC │
	│ stop    │ -p default-k8s-diff-port-168174 --alsologtostderr -v=3                                                                                                                                                                                      │ default-k8s-diff-port-168174 │ jenkins │ v1.37.0 │ 19 Dec 25 03:52 UTC │ 19 Dec 25 03:54 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-094166 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                           │ old-k8s-version-094166       │ jenkins │ v1.37.0 │ 19 Dec 25 03:53 UTC │ 19 Dec 25 03:53 UTC │
	│ start   │ -p old-k8s-version-094166 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-094166       │ jenkins │ v1.37.0 │ 19 Dec 25 03:53 UTC │ 19 Dec 25 03:53 UTC │
	│ addons  │ enable dashboard -p no-preload-298059 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                │ no-preload-298059            │ jenkins │ v1.37.0 │ 19 Dec 25 03:53 UTC │ 19 Dec 25 03:53 UTC │
	│ start   │ -p no-preload-298059 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-298059            │ jenkins │ v1.37.0 │ 19 Dec 25 03:53 UTC │ 19 Dec 25 04:00 UTC │
	│ addons  │ enable dashboard -p embed-certs-244717 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                               │ embed-certs-244717           │ jenkins │ v1.37.0 │ 19 Dec 25 03:53 UTC │ 19 Dec 25 03:53 UTC │
	│ start   │ -p embed-certs-244717 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-244717           │ jenkins │ v1.37.0 │ 19 Dec 25 03:53 UTC │ 19 Dec 25 04:00 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-168174 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                     │ default-k8s-diff-port-168174 │ jenkins │ v1.37.0 │ 19 Dec 25 03:54 UTC │ 19 Dec 25 03:54 UTC │
	│ start   │ -p default-k8s-diff-port-168174 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-168174 │ jenkins │ v1.37.0 │ 19 Dec 25 03:54 UTC │ 19 Dec 25 04:01 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────
─────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 03:54:19
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 03:54:19.163618   56230 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:54:19.163755   56230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:54:19.163766   56230 out.go:374] Setting ErrFile to fd 2...
	I1219 03:54:19.163773   56230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:54:19.164086   56230 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5010/.minikube/bin
	I1219 03:54:19.164710   56230 out.go:368] Setting JSON to false
	I1219 03:54:19.166058   56230 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5803,"bootTime":1766110656,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 03:54:19.166138   56230 start.go:143] virtualization: kvm guest
	I1219 03:54:19.167819   56230 out.go:179] * [default-k8s-diff-port-168174] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 03:54:19.168806   56230 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 03:54:19.168798   56230 notify.go:221] Checking for updates...
	I1219 03:54:19.170649   56230 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 03:54:19.171718   56230 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-5010/kubeconfig
	I1219 03:54:19.172800   56230 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5010/.minikube
	I1219 03:54:19.173680   56230 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 03:54:19.174607   56230 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 03:54:19.176155   56230 config.go:182] Loaded profile config "default-k8s-diff-port-168174": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:54:19.176843   56230 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 03:54:19.221795   56230 out.go:179] * Using the kvm2 driver based on existing profile
	I1219 03:54:19.222673   56230 start.go:309] selected driver: kvm2
	I1219 03:54:19.222686   56230 start.go:928] validating driver "kvm2" against &{Name:default-k8s-diff-port-168174 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-168174 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.68 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:f
alse ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:54:19.222787   56230 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 03:54:19.223700   56230 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:54:19.223731   56230 cni.go:84] Creating CNI manager for ""
	I1219 03:54:19.223785   56230 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1219 03:54:19.223821   56230 start.go:353] cluster config:
	{Name:default-k8s-diff-port-168174 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-168174 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.68 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000
.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:54:19.223901   56230 iso.go:125] acquiring lock: {Name:mk42290af04a74a4dddf27fa33aac85c9bdccfe0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:54:19.225058   56230 out.go:179] * Starting "default-k8s-diff-port-168174" primary control-plane node in "default-k8s-diff-port-168174" cluster
	I1219 03:54:19.225891   56230 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1219 03:54:19.225925   56230 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-5010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1219 03:54:19.225937   56230 cache.go:65] Caching tarball of preloaded images
	I1219 03:54:19.226014   56230 preload.go:238] Found /home/jenkins/minikube-integration/22230-5010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1219 03:54:19.226025   56230 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1219 03:54:19.226103   56230 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/default-k8s-diff-port-168174/config.json ...
	I1219 03:54:19.226379   56230 start.go:360] acquireMachinesLock for default-k8s-diff-port-168174: {Name:mk229398d29442d4d52885bbac963e5004bfbfba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1219 03:54:19.226434   56230 start.go:364] duration metric: took 34.138µs to acquireMachinesLock for "default-k8s-diff-port-168174"
	I1219 03:54:19.226446   56230 start.go:96] Skipping create...Using existing machine configuration
	I1219 03:54:19.226451   56230 fix.go:54] fixHost starting: 
	I1219 03:54:19.228163   56230 fix.go:112] recreateIfNeeded on default-k8s-diff-port-168174: state=Stopped err=<nil>
	W1219 03:54:19.228180   56230 fix.go:138] unexpected machine state, will restart: <nil>
	I1219 03:54:16.533332   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:17.359209   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:17.532886   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:18.033640   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:18.533499   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:19.033373   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:19.533624   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:20.033318   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:20.532932   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:21.032204   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:18.384127   55957 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:54:18.420807   55957 api_server.go:72] duration metric: took 1.537508247s to wait for apiserver process to appear ...
	I1219 03:54:18.420840   55957 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:54:18.420862   55957 api_server.go:253] Checking apiserver healthz at https://192.168.83.54:8443/healthz ...
	I1219 03:54:21.071318   55957 api_server.go:279] https://192.168.83.54:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1219 03:54:21.071349   55957 api_server.go:103] status: https://192.168.83.54:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1219 03:54:21.071368   55957 api_server.go:253] Checking apiserver healthz at https://192.168.83.54:8443/healthz ...
	I1219 03:54:21.151121   55957 api_server.go:279] https://192.168.83.54:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1219 03:54:21.151151   55957 api_server.go:103] status: https://192.168.83.54:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1219 03:54:21.421632   55957 api_server.go:253] Checking apiserver healthz at https://192.168.83.54:8443/healthz ...
	I1219 03:54:21.426745   55957 api_server.go:279] https://192.168.83.54:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:54:21.426773   55957 api_server.go:103] status: https://192.168.83.54:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:54:21.921398   55957 api_server.go:253] Checking apiserver healthz at https://192.168.83.54:8443/healthz ...
	I1219 03:54:21.927340   55957 api_server.go:279] https://192.168.83.54:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:54:21.927368   55957 api_server.go:103] status: https://192.168.83.54:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:54:22.420988   55957 api_server.go:253] Checking apiserver healthz at https://192.168.83.54:8443/healthz ...
	I1219 03:54:22.428236   55957 api_server.go:279] https://192.168.83.54:8443/healthz returned 200:
	ok
	I1219 03:54:22.439161   55957 api_server.go:141] control plane version: v1.34.3
	I1219 03:54:22.439190   55957 api_server.go:131] duration metric: took 4.018341977s to wait for apiserver health ...
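The 500 responses above come from /healthz while two poststarthooks (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes) are still pending; the poll simply repeats until the endpoint returns 200 "ok". A minimal sketch of such a readiness poll is shown below, assuming a self-signed apiserver certificate and a hypothetical one-minute deadline; minikube's actual retry logic lives in api_server.go and differs in detail.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz URL until it answers 200 or the
// deadline expires. Non-200 answers (such as the 500s in the log above) are
// treated as "not ready yet" and retried.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// The apiserver presents a self-signed certificate on this port, so
		// verification is skipped for this local readiness probe only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz reported "ok"
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.83.54:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
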
	I1219 03:54:22.439202   55957 cni.go:84] Creating CNI manager for ""
	I1219 03:54:22.439211   55957 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1219 03:54:22.440712   55957 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1219 03:54:22.442679   55957 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1219 03:54:22.464908   55957 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1219 03:54:22.524765   55957 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:54:22.531030   55957 system_pods.go:59] 8 kube-system pods found
	I1219 03:54:22.531082   55957 system_pods.go:61] "coredns-66bc5c9577-9ptrv" [22226444-faa6-420d-a862-1ef0441a80e4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:54:22.531096   55957 system_pods.go:61] "etcd-embed-certs-244717" [24fc3b7f-cbce-4591-9d18-9859136e38c6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:54:22.531109   55957 system_pods.go:61] "kube-apiserver-embed-certs-244717" [67d6bc2f-88f4-449a-a029-dc66d6ad1468] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:54:22.531117   55957 system_pods.go:61] "kube-controller-manager-embed-certs-244717" [f2be3fb7-4440-43a3-96ec-b2b241393a84] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:54:22.531126   55957 system_pods.go:61] "kube-proxy-p8gvm" [283607b2-9e6c-44f4-9c9d-7d713c71fb8c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:54:22.531135   55957 system_pods.go:61] "kube-scheduler-embed-certs-244717" [243ff9cd-448c-4e21-98aa-032de9f329fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:54:22.531151   55957 system_pods.go:61] "metrics-server-746fcd58dc-x74d4" [e4a33dcc-d3b6-45a0-92d7-6cfbc5df35b2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:54:22.531159   55957 system_pods.go:61] "storage-provisioner" [99ff9c60-2f30-457a-8cb5-e030eb64a58e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:54:22.531169   55957 system_pods.go:74] duration metric: took 6.378453ms to wait for pod list to return data ...
	I1219 03:54:22.531184   55957 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:54:22.538334   55957 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1219 03:54:22.538361   55957 node_conditions.go:123] node cpu capacity is 2
	I1219 03:54:22.538378   55957 node_conditions.go:105] duration metric: took 7.188571ms to run NodePressure ...
	I1219 03:54:22.538434   55957 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:54:22.838171   55957 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1219 03:54:22.841979   55957 kubeadm.go:744] kubelet initialised
	I1219 03:54:22.842009   55957 kubeadm.go:745] duration metric: took 3.812738ms waiting for restarted kubelet to initialise ...
	I1219 03:54:22.842027   55957 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1219 03:54:22.858280   55957 ops.go:34] apiserver oom_adj: -16
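The oom_adj check above shells out to `cat /proc/$(pgrep kube-apiserver)/oom_adj` over SSH. A rough Go equivalent that scans /proc instead of using pgrep is sketched below; unlike `pgrep -xnf` it returns the first matching process rather than the newest, and the substring match on the command line is an assumption for illustration.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// readOOMAdj looks for a process whose command line contains name and returns
// the contents of its /proc/<pid>/oom_adj file.
func readOOMAdj(name string) (string, error) {
	entries, err := os.ReadDir("/proc")
	if err != nil {
		return "", err
	}
	for _, e := range entries {
		if !e.IsDir() {
			continue
		}
		cmdline, err := os.ReadFile(filepath.Join("/proc", e.Name(), "cmdline"))
		if err != nil || !strings.Contains(string(cmdline), name) {
			continue // not a PID directory, or not the process we want
		}
		adj, err := os.ReadFile(filepath.Join("/proc", e.Name(), "oom_adj"))
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(adj)), nil
	}
	return "", fmt.Errorf("no process matching %q found", name)
}

func main() {
	adj, err := readOOMAdj("kube-apiserver")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kube-apiserver oom_adj:", adj)
}
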
	I1219 03:54:22.858296   55957 kubeadm.go:602] duration metric: took 8.274282939s to restartPrimaryControlPlane
	I1219 03:54:22.858304   55957 kubeadm.go:403] duration metric: took 8.332738451s to StartCluster
	I1219 03:54:22.858319   55957 settings.go:142] acquiring lock: {Name:mkb131693a40b9aa50c302192272518c3561c861 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:54:22.858398   55957 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-5010/kubeconfig
	I1219 03:54:22.860091   55957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5010/kubeconfig: {Name:mk464ac013e036429e360ed78fc04687ed75f83a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:54:22.860306   55957 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.83.54 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1219 03:54:22.860397   55957 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1219 03:54:22.860520   55957 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-244717"
	I1219 03:54:22.860540   55957 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-244717"
	W1219 03:54:22.860553   55957 addons.go:248] addon storage-provisioner should already be in state true
	I1219 03:54:22.860556   55957 addons.go:70] Setting default-storageclass=true in profile "embed-certs-244717"
	I1219 03:54:22.860588   55957 config.go:182] Loaded profile config "embed-certs-244717": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:54:22.860638   55957 addons.go:70] Setting dashboard=true in profile "embed-certs-244717"
	I1219 03:54:22.860664   55957 addons.go:239] Setting addon dashboard=true in "embed-certs-244717"
	W1219 03:54:22.860674   55957 addons.go:248] addon dashboard should already be in state true
	I1219 03:54:22.860596   55957 host.go:66] Checking if "embed-certs-244717" exists ...
	I1219 03:54:22.860698   55957 host.go:66] Checking if "embed-certs-244717" exists ...
	I1219 03:54:22.860603   55957 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-244717"
	I1219 03:54:22.860613   55957 addons.go:70] Setting metrics-server=true in profile "embed-certs-244717"
	I1219 03:54:22.861202   55957 addons.go:239] Setting addon metrics-server=true in "embed-certs-244717"
	W1219 03:54:22.861219   55957 addons.go:248] addon metrics-server should already be in state true
	I1219 03:54:22.861243   55957 host.go:66] Checking if "embed-certs-244717" exists ...
	I1219 03:54:22.861875   55957 out.go:179] * Verifying Kubernetes components...
	I1219 03:54:22.862820   55957 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:54:22.863427   55957 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:54:22.863444   55957 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
	I1219 03:54:22.864891   55957 addons.go:239] Setting addon default-storageclass=true in "embed-certs-244717"
	W1219 03:54:22.864914   55957 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:54:22.864935   55957 host.go:66] Checking if "embed-certs-244717" exists ...
	I1219 03:54:22.866702   55957 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:54:22.866730   55957 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:54:22.866703   55957 main.go:144] libmachine: domain embed-certs-244717 has defined MAC address 52:54:00:c1:1a:c3 in network mk-embed-certs-244717
	I1219 03:54:22.866913   55957 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 03:54:22.867359   55957 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c1:1a:c3", ip: ""} in network mk-embed-certs-244717: {Iface:virbr5 ExpiryTime:2025-12-19 04:54:05 +0000 UTC Type:0 Mac:52:54:00:c1:1a:c3 Iaid: IPaddr:192.168.83.54 Prefix:24 Hostname:embed-certs-244717 Clientid:01:52:54:00:c1:1a:c3}
	I1219 03:54:22.867391   55957 main.go:144] libmachine: domain embed-certs-244717 has defined IP address 192.168.83.54 and MAC address 52:54:00:c1:1a:c3 in network mk-embed-certs-244717
	I1219 03:54:22.867616   55957 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1219 03:54:22.867638   55957 sshutil.go:53] new ssh client: &{IP:192.168.83.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/embed-certs-244717/id_rsa Username:docker}
	I1219 03:54:22.868328   55957 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:54:22.868344   55957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:54:22.868968   55957 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1219 03:54:22.869019   55957 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1219 03:54:22.870937   55957 main.go:144] libmachine: domain embed-certs-244717 has defined MAC address 52:54:00:c1:1a:c3 in network mk-embed-certs-244717
	I1219 03:54:22.871717   55957 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c1:1a:c3", ip: ""} in network mk-embed-certs-244717: {Iface:virbr5 ExpiryTime:2025-12-19 04:54:05 +0000 UTC Type:0 Mac:52:54:00:c1:1a:c3 Iaid: IPaddr:192.168.83.54 Prefix:24 Hostname:embed-certs-244717 Clientid:01:52:54:00:c1:1a:c3}
	I1219 03:54:22.871748   55957 main.go:144] libmachine: domain embed-certs-244717 has defined IP address 192.168.83.54 and MAC address 52:54:00:c1:1a:c3 in network mk-embed-certs-244717
	I1219 03:54:22.871986   55957 sshutil.go:53] new ssh client: &{IP:192.168.83.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/embed-certs-244717/id_rsa Username:docker}
	I1219 03:54:22.872790   55957 main.go:144] libmachine: domain embed-certs-244717 has defined MAC address 52:54:00:c1:1a:c3 in network mk-embed-certs-244717
	I1219 03:54:22.873111   55957 main.go:144] libmachine: domain embed-certs-244717 has defined MAC address 52:54:00:c1:1a:c3 in network mk-embed-certs-244717
	I1219 03:54:22.873212   55957 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c1:1a:c3", ip: ""} in network mk-embed-certs-244717: {Iface:virbr5 ExpiryTime:2025-12-19 04:54:05 +0000 UTC Type:0 Mac:52:54:00:c1:1a:c3 Iaid: IPaddr:192.168.83.54 Prefix:24 Hostname:embed-certs-244717 Clientid:01:52:54:00:c1:1a:c3}
	I1219 03:54:22.873235   55957 main.go:144] libmachine: domain embed-certs-244717 has defined IP address 192.168.83.54 and MAC address 52:54:00:c1:1a:c3 in network mk-embed-certs-244717
	I1219 03:54:22.873423   55957 sshutil.go:53] new ssh client: &{IP:192.168.83.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/embed-certs-244717/id_rsa Username:docker}
	I1219 03:54:22.873635   55957 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c1:1a:c3", ip: ""} in network mk-embed-certs-244717: {Iface:virbr5 ExpiryTime:2025-12-19 04:54:05 +0000 UTC Type:0 Mac:52:54:00:c1:1a:c3 Iaid: IPaddr:192.168.83.54 Prefix:24 Hostname:embed-certs-244717 Clientid:01:52:54:00:c1:1a:c3}
	I1219 03:54:22.873666   55957 main.go:144] libmachine: domain embed-certs-244717 has defined IP address 192.168.83.54 and MAC address 52:54:00:c1:1a:c3 in network mk-embed-certs-244717
	I1219 03:54:22.873832   55957 sshutil.go:53] new ssh client: &{IP:192.168.83.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/embed-certs-244717/id_rsa Username:docker}
	I1219 03:54:23.104462   55957 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:54:23.139781   55957 node_ready.go:35] waiting up to 6m0s for node "embed-certs-244717" to be "Ready" ...
	I1219 03:54:19.229464   56230 out.go:252] * Restarting existing kvm2 VM for "default-k8s-diff-port-168174" ...
	I1219 03:54:19.229501   56230 main.go:144] libmachine: starting domain...
	I1219 03:54:19.229509   56230 main.go:144] libmachine: ensuring networks are active...
	I1219 03:54:19.230233   56230 main.go:144] libmachine: Ensuring network default is active
	I1219 03:54:19.230721   56230 main.go:144] libmachine: Ensuring network mk-default-k8s-diff-port-168174 is active
	I1219 03:54:19.231248   56230 main.go:144] libmachine: getting domain XML...
	I1219 03:54:19.232369   56230 main.go:144] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>default-k8s-diff-port-168174</name>
	  <uuid>5503b0a8-1398-475d-b625-563c5bc2d168</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22230-5010/.minikube/machines/default-k8s-diff-port-168174/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22230-5010/.minikube/machines/default-k8s-diff-port-168174/default-k8s-diff-port-168174.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:d9:97:a2'/>
	      <source network='mk-default-k8s-diff-port-168174'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:3f:9e:c8'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1219 03:54:20.662520   56230 main.go:144] libmachine: waiting for domain to start...
	I1219 03:54:20.663943   56230 main.go:144] libmachine: domain is now running
	I1219 03:54:20.663969   56230 main.go:144] libmachine: waiting for IP...
	I1219 03:54:20.664770   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:20.665467   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has current primary IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:20.665481   56230 main.go:144] libmachine: found domain IP: 192.168.50.68
	I1219 03:54:20.665486   56230 main.go:144] libmachine: reserving static IP address...
	I1219 03:54:20.665943   56230 main.go:144] libmachine: found host DHCP lease matching {name: "default-k8s-diff-port-168174", mac: "52:54:00:d9:97:a2", ip: "192.168.50.68"} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:51:35 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:20.665989   56230 main.go:144] libmachine: skip adding static IP to network mk-default-k8s-diff-port-168174 - found existing host DHCP lease matching {name: "default-k8s-diff-port-168174", mac: "52:54:00:d9:97:a2", ip: "192.168.50.68"}
	I1219 03:54:20.666003   56230 main.go:144] libmachine: reserved static IP address 192.168.50.68 for domain default-k8s-diff-port-168174
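The domain XML dumped above declares two virtio NICs, and the lines just before this one match their MAC addresses against the libvirt DHCP leases to recover the VM's IP. A minimal sketch of pulling those MACs out of such XML with encoding/xml follows; it is illustrative only, with a shortened literal standing in for the real dump.

package main

import (
	"encoding/xml"
	"fmt"
)

// Only the fields needed to reach the interface MAC addresses are modelled;
// the rest of the libvirt domain XML is ignored during unmarshalling.
type domain struct {
	Name       string `xml:"name"`
	Interfaces []struct {
		MAC struct {
			Address string `xml:"address,attr"`
		} `xml:"mac"`
		Source struct {
			Network string `xml:"network,attr"`
		} `xml:"source"`
	} `xml:"devices>interface"`
}

func main() {
	// A shortened stand-in for the domain XML logged above.
	raw := `<domain type='kvm'>
  <name>default-k8s-diff-port-168174</name>
  <devices>
    <interface type='network'>
      <mac address='52:54:00:d9:97:a2'/>
      <source network='mk-default-k8s-diff-port-168174'/>
    </interface>
    <interface type='network'>
      <mac address='52:54:00:3f:9e:c8'/>
      <source network='default'/>
    </interface>
  </devices>
</domain>`

	var d domain
	if err := xml.Unmarshal([]byte(raw), &d); err != nil {
		panic(err)
	}
	for _, iface := range d.Interfaces {
		fmt.Printf("domain %s has MAC %s in network %s\n", d.Name, iface.MAC.Address, iface.Source.Network)
	}
}
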
	I1219 03:54:20.666019   56230 main.go:144] libmachine: waiting for SSH...
	I1219 03:54:20.666027   56230 main.go:144] libmachine: Getting to WaitForSSH function...
	I1219 03:54:20.668799   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:20.669225   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:51:35 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:20.669267   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:20.669495   56230 main.go:144] libmachine: Using SSH client type: native
	I1219 03:54:20.669789   56230 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.50.68 22 <nil> <nil>}
	I1219 03:54:20.669805   56230 main.go:144] libmachine: About to run SSH command:
	exit 0
	I1219 03:54:23.725788   56230 main.go:144] libmachine: Error dialing TCP: dial tcp 192.168.50.68:22: connect: no route to host
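The "no route to host" and "connection refused" errors above are the expected retry pattern while the freshly started VM boots and sshd comes up. Below is a minimal sketch of such a reachability wait, assuming a plain TCP probe on port 22; the real WaitForSSH additionally completes an SSH handshake and runs `exit 0`.

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForTCP retries a TCP dial until the port accepts connections or the
// deadline passes. Dial errors like those in the log above are treated as
// "not ready yet".
func waitForTCP(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		fmt.Printf("still waiting for %s: %v\n", addr, err)
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("%s not reachable after %s", addr, timeout)
}

func main() {
	if err := waitForTCP("192.168.50.68:22", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
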
	I1219 03:54:21.532614   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:22.032788   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:22.532959   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:23.032773   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:23.531977   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:24.033500   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:24.532177   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:25.033441   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:25.533482   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:26.031758   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:23.198551   55957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:54:23.404667   55957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:54:23.420466   55957 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:54:23.445604   55957 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1219 03:54:23.445631   55957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1219 03:54:23.525300   55957 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1219 03:54:23.525326   55957 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1219 03:54:23.593759   55957 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:54:23.593784   55957 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1219 03:54:23.645141   55957 node_ready.go:49] node "embed-certs-244717" is "Ready"
	I1219 03:54:23.645171   55957 node_ready.go:38] duration metric: took 505.352434ms for node "embed-certs-244717" to be "Ready" ...
	I1219 03:54:23.645183   55957 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:54:23.645241   55957 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:54:23.652800   55957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:54:24.781529   55957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.376827148s)
	I1219 03:54:24.781591   55957 ssh_runner.go:235] Completed: test -f /usr/bin/helm: (1.361072264s)
	I1219 03:54:24.781616   55957 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.136359787s)
	I1219 03:54:24.781638   55957 api_server.go:72] duration metric: took 1.9213054s to wait for apiserver process to appear ...
	I1219 03:54:24.781645   55957 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:54:24.781662   55957 api_server.go:253] Checking apiserver healthz at https://192.168.83.54:8443/healthz ...
	I1219 03:54:24.781671   55957 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:54:24.791019   55957 api_server.go:279] https://192.168.83.54:8443/healthz returned 200:
	ok
	I1219 03:54:24.791945   55957 api_server.go:141] control plane version: v1.34.3
	I1219 03:54:24.791970   55957 api_server.go:131] duration metric: took 10.31791ms to wait for apiserver health ...
	I1219 03:54:24.791980   55957 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:54:24.795539   55957 system_pods.go:59] 8 kube-system pods found
	I1219 03:54:24.795599   55957 system_pods.go:61] "coredns-66bc5c9577-9ptrv" [22226444-faa6-420d-a862-1ef0441a80e4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:54:24.795612   55957 system_pods.go:61] "etcd-embed-certs-244717" [24fc3b7f-cbce-4591-9d18-9859136e38c6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:54:24.795627   55957 system_pods.go:61] "kube-apiserver-embed-certs-244717" [67d6bc2f-88f4-449a-a029-dc66d6ad1468] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:54:24.795638   55957 system_pods.go:61] "kube-controller-manager-embed-certs-244717" [f2be3fb7-4440-43a3-96ec-b2b241393a84] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:54:24.795644   55957 system_pods.go:61] "kube-proxy-p8gvm" [283607b2-9e6c-44f4-9c9d-7d713c71fb8c] Running
	I1219 03:54:24.795655   55957 system_pods.go:61] "kube-scheduler-embed-certs-244717" [243ff9cd-448c-4e21-98aa-032de9f329fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:54:24.795666   55957 system_pods.go:61] "metrics-server-746fcd58dc-x74d4" [e4a33dcc-d3b6-45a0-92d7-6cfbc5df35b2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:54:24.795671   55957 system_pods.go:61] "storage-provisioner" [99ff9c60-2f30-457a-8cb5-e030eb64a58e] Running
	I1219 03:54:24.795683   55957 system_pods.go:74] duration metric: took 3.696303ms to wait for pod list to return data ...
	I1219 03:54:24.795694   55957 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:54:24.797860   55957 default_sa.go:45] found service account: "default"
	I1219 03:54:24.797884   55957 default_sa.go:55] duration metric: took 2.181869ms for default service account to be created ...
	I1219 03:54:24.797895   55957 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:54:24.800212   55957 system_pods.go:86] 8 kube-system pods found
	I1219 03:54:24.800242   55957 system_pods.go:89] "coredns-66bc5c9577-9ptrv" [22226444-faa6-420d-a862-1ef0441a80e4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:54:24.800255   55957 system_pods.go:89] "etcd-embed-certs-244717" [24fc3b7f-cbce-4591-9d18-9859136e38c6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:54:24.800267   55957 system_pods.go:89] "kube-apiserver-embed-certs-244717" [67d6bc2f-88f4-449a-a029-dc66d6ad1468] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:54:24.800277   55957 system_pods.go:89] "kube-controller-manager-embed-certs-244717" [f2be3fb7-4440-43a3-96ec-b2b241393a84] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:54:24.800283   55957 system_pods.go:89] "kube-proxy-p8gvm" [283607b2-9e6c-44f4-9c9d-7d713c71fb8c] Running
	I1219 03:54:24.800291   55957 system_pods.go:89] "kube-scheduler-embed-certs-244717" [243ff9cd-448c-4e21-98aa-032de9f329fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:54:24.800300   55957 system_pods.go:89] "metrics-server-746fcd58dc-x74d4" [e4a33dcc-d3b6-45a0-92d7-6cfbc5df35b2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:54:24.800307   55957 system_pods.go:89] "storage-provisioner" [99ff9c60-2f30-457a-8cb5-e030eb64a58e] Running
	I1219 03:54:24.800317   55957 system_pods.go:126] duration metric: took 2.415918ms to wait for k8s-apps to be running ...
	I1219 03:54:24.800326   55957 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:54:24.800389   55957 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:54:24.901954   55957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.249113047s)
	I1219 03:54:24.901997   55957 addons.go:500] Verifying addon metrics-server=true in "embed-certs-244717"
	I1219 03:54:24.902043   55957 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	I1219 03:54:24.902053   55957 system_svc.go:56] duration metric: took 101.72157ms WaitForService to wait for kubelet
	I1219 03:54:24.902083   55957 kubeadm.go:587] duration metric: took 2.041739112s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:54:24.902106   55957 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:54:24.912597   55957 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1219 03:54:24.912623   55957 node_conditions.go:123] node cpu capacity is 2
	I1219 03:54:24.912638   55957 node_conditions.go:105] duration metric: took 10.525951ms to run NodePressure ...
	I1219 03:54:24.912652   55957 start.go:242] waiting for startup goroutines ...
	I1219 03:54:25.801998   55957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	I1219 03:54:29.507152   55957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (3.70510669s)
	I1219 03:54:29.507259   55957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:54:29.992247   55957 addons.go:500] Verifying addon dashboard=true in "embed-certs-244717"
	I1219 03:54:29.995517   55957 out.go:179] * Verifying dashboard addon...
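The dashboard addon is installed by running helm on the node with KUBECONFIG pointed at /var/lib/minikube/kubeconfig, as logged above. A sketch of an equivalent invocation driven from Go with os/exec follows; a locally installed helm binary and direct access to that kubeconfig path are assumptions, since minikube itself runs this command over SSH via ssh_runner.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Mirrors the helm command in the log above; the kubeconfig path is a
	// placeholder and must point at the target cluster.
	cmd := exec.Command("helm", "upgrade", "--install", "kubernetes-dashboard", "kubernetes-dashboard",
		"--create-namespace",
		"--repo", "https://kubernetes.github.io/dashboard/",
		"--namespace", "kubernetes-dashboard",
		"--set", "nginx.enabled=false",
		"--set", "cert-manager.enabled=false",
		"--set", "metrics-server.enabled=false",
		"--set", "kong.proxy.type=NodePort")
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("helm install failed:", err)
	}
}
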
	I1219 03:54:26.531479   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:27.031454   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:27.532215   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:28.032964   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:28.532268   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:29.032253   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:29.533154   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:30.032379   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:30.532853   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:31.032643   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:29.998065   55957 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:54:30.003541   55957 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:54:30.003561   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:30.510371   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:31.003319   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:31.502854   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:32.002809   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:32.503083   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:33.001709   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:29.805953   56230 main.go:144] libmachine: Error dialing TCP: dial tcp 192.168.50.68:22: connect: no route to host
	I1219 03:54:32.806901   56230 main.go:144] libmachine: Error dialing TCP: dial tcp 192.168.50.68:22: connect: connection refused
	I1219 03:54:31.531396   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:32.033946   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:32.532063   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:33.033088   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:33.532601   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:34.032154   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:34.532734   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:35.031403   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:35.532231   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:36.031798   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:33.501421   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:34.001823   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:34.501944   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:35.001242   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:35.502033   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:36.001834   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:36.503279   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:37.002832   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:37.501859   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:38.002218   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:35.914133   56230 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 03:54:35.917629   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:35.918062   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:35.918084   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:35.918331   56230 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/default-k8s-diff-port-168174/config.json ...
	I1219 03:54:35.918603   56230 machine.go:94] provisionDockerMachine start ...
	I1219 03:54:35.921009   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:35.921341   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:35.921380   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:35.921581   56230 main.go:144] libmachine: Using SSH client type: native
	I1219 03:54:35.921797   56230 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.50.68 22 <nil> <nil>}
	I1219 03:54:35.921810   56230 main.go:144] libmachine: About to run SSH command:
	hostname
	I1219 03:54:36.027619   56230 main.go:144] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1219 03:54:36.027644   56230 buildroot.go:166] provisioning hostname "default-k8s-diff-port-168174"
	I1219 03:54:36.030973   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.031540   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:36.031597   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.031855   56230 main.go:144] libmachine: Using SSH client type: native
	I1219 03:54:36.032105   56230 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.50.68 22 <nil> <nil>}
	I1219 03:54:36.032121   56230 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-168174 && echo "default-k8s-diff-port-168174" | sudo tee /etc/hostname
	I1219 03:54:36.154920   56230 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-168174
	
	I1219 03:54:36.157818   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.158270   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:36.158298   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.158481   56230 main.go:144] libmachine: Using SSH client type: native
	I1219 03:54:36.158705   56230 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.50.68 22 <nil> <nil>}
	I1219 03:54:36.158721   56230 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-168174' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-168174/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-168174' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1219 03:54:36.278763   56230 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 03:54:36.278793   56230 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22230-5010/.minikube CaCertPath:/home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22230-5010/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22230-5010/.minikube}
	I1219 03:54:36.278815   56230 buildroot.go:174] setting up certificates
	I1219 03:54:36.278825   56230 provision.go:84] configureAuth start
	I1219 03:54:36.282034   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.282595   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:36.282631   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.285039   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.285396   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:36.285421   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.285558   56230 provision.go:143] copyHostCerts
	I1219 03:54:36.285634   56230 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-5010/.minikube/ca.pem, removing ...
	I1219 03:54:36.285655   56230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-5010/.minikube/ca.pem
	I1219 03:54:36.285732   56230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22230-5010/.minikube/ca.pem (1082 bytes)
	I1219 03:54:36.285873   56230 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-5010/.minikube/cert.pem, removing ...
	I1219 03:54:36.285889   56230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-5010/.minikube/cert.pem
	I1219 03:54:36.285939   56230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22230-5010/.minikube/cert.pem (1123 bytes)
	I1219 03:54:36.286034   56230 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-5010/.minikube/key.pem, removing ...
	I1219 03:54:36.286044   56230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-5010/.minikube/key.pem
	I1219 03:54:36.286086   56230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22230-5010/.minikube/key.pem (1679 bytes)
	I1219 03:54:36.286187   56230 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22230-5010/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-168174 san=[127.0.0.1 192.168.50.68 default-k8s-diff-port-168174 localhost minikube]
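provision.go generates a server certificate whose SANs cover 127.0.0.1, the node IP, the hostname, localhost and minikube, signed by the profile's CA. The sketch below is illustrative only: it self-signs instead of using a CA, and only the SAN list mirrors the log line above.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Generate a throwaway key and a self-signed server certificate carrying
	// the same kinds of SANs seen in the provisioning log.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-168174"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"default-k8s-diff-port-168174", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.68")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
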
	I1219 03:54:36.425832   56230 provision.go:177] copyRemoteCerts
	I1219 03:54:36.425892   56230 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1219 03:54:36.428255   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.428656   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:36.428686   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.428839   56230 sshutil.go:53] new ssh client: &{IP:192.168.50.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/default-k8s-diff-port-168174/id_rsa Username:docker}
	I1219 03:54:36.519020   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1219 03:54:36.558591   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1219 03:54:36.592448   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1219 03:54:36.618754   56230 provision.go:87] duration metric: took 339.918165ms to configureAuth
	I1219 03:54:36.618782   56230 buildroot.go:189] setting minikube options for container-runtime
	I1219 03:54:36.618965   56230 config.go:182] Loaded profile config "default-k8s-diff-port-168174": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:54:36.622080   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.622643   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:36.622690   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.622932   56230 main.go:144] libmachine: Using SSH client type: native
	I1219 03:54:36.623146   56230 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.50.68 22 <nil> <nil>}
	I1219 03:54:36.623170   56230 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1219 03:54:36.870072   56230 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1219 03:54:36.870099   56230 machine.go:97] duration metric: took 951.477635ms to provisionDockerMachine
	I1219 03:54:36.870113   56230 start.go:293] postStartSetup for "default-k8s-diff-port-168174" (driver="kvm2")
	I1219 03:54:36.870125   56230 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1219 03:54:36.870211   56230 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1219 03:54:36.873360   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.873824   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:36.873854   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.873997   56230 sshutil.go:53] new ssh client: &{IP:192.168.50.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/default-k8s-diff-port-168174/id_rsa Username:docker}
	I1219 03:54:36.957455   56230 ssh_runner.go:195] Run: cat /etc/os-release
	I1219 03:54:36.962098   56230 info.go:137] Remote host: Buildroot 2025.02
	I1219 03:54:36.962123   56230 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-5010/.minikube/addons for local assets ...
	I1219 03:54:36.962187   56230 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-5010/.minikube/files for local assets ...
	I1219 03:54:36.962258   56230 filesync.go:149] local asset: /home/jenkins/minikube-integration/22230-5010/.minikube/files/etc/ssl/certs/89372.pem -> 89372.pem in /etc/ssl/certs
	I1219 03:54:36.962365   56230 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1219 03:54:36.973208   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/files/etc/ssl/certs/89372.pem --> /etc/ssl/certs/89372.pem (1708 bytes)
	I1219 03:54:37.001535   56230 start.go:296] duration metric: took 131.409863ms for postStartSetup
	I1219 03:54:37.001590   56230 fix.go:56] duration metric: took 17.775113489s for fixHost
	I1219 03:54:37.004880   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:37.005287   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:37.005312   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:37.005528   56230 main.go:144] libmachine: Using SSH client type: native
	I1219 03:54:37.005820   56230 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.50.68 22 <nil> <nil>}
	I1219 03:54:37.005839   56230 main.go:144] libmachine: About to run SSH command:
	date +%s.%N
	I1219 03:54:37.113597   56230 main.go:144] libmachine: SSH cmd err, output: <nil>: 1766116477.079572846
	
	I1219 03:54:37.113621   56230 fix.go:216] guest clock: 1766116477.079572846
	I1219 03:54:37.113630   56230 fix.go:229] Guest: 2025-12-19 03:54:37.079572846 +0000 UTC Remote: 2025-12-19 03:54:37.001596336 +0000 UTC m=+17.891500693 (delta=77.97651ms)
	I1219 03:54:37.113645   56230 fix.go:200] guest clock delta is within tolerance: 77.97651ms
	I1219 03:54:37.113651   56230 start.go:83] releasing machines lock for "default-k8s-diff-port-168174", held for 17.887209269s
	I1219 03:54:37.116322   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:37.116867   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:37.116898   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:37.117549   56230 ssh_runner.go:195] Run: cat /version.json
	I1219 03:54:37.117645   56230 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1219 03:54:37.121299   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:37.121532   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:37.121841   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:37.121885   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:37.122114   56230 sshutil.go:53] new ssh client: &{IP:192.168.50.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/default-k8s-diff-port-168174/id_rsa Username:docker}
	I1219 03:54:37.122168   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:37.122203   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:37.122439   56230 sshutil.go:53] new ssh client: &{IP:192.168.50.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/default-k8s-diff-port-168174/id_rsa Username:docker}
	I1219 03:54:37.200188   56230 ssh_runner.go:195] Run: systemctl --version
	I1219 03:54:37.236006   56230 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1219 03:54:37.382400   56230 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1219 03:54:37.391093   56230 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1219 03:54:37.391172   56230 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1219 03:54:37.412549   56230 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1219 03:54:37.412595   56230 start.go:496] detecting cgroup driver to use...
	I1219 03:54:37.412701   56230 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1219 03:54:37.432292   56230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1219 03:54:37.448705   56230 docker.go:218] disabling cri-docker service (if available) ...
	I1219 03:54:37.448757   56230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1219 03:54:37.464885   56230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1219 03:54:37.488524   56230 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1219 03:54:37.648374   56230 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1219 03:54:37.863271   56230 docker.go:234] disabling docker service ...
	I1219 03:54:37.863333   56230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1219 03:54:37.880285   56230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1219 03:54:37.895631   56230 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1219 03:54:38.053642   56230 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1219 03:54:38.210829   56230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1219 03:54:38.227130   56230 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 03:54:38.248699   56230 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1219 03:54:38.248763   56230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:54:38.260875   56230 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1219 03:54:38.260939   56230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:54:38.273032   56230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:54:38.284839   56230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:54:38.296706   56230 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1219 03:54:38.309100   56230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:54:38.320373   56230 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:54:38.343213   56230 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:54:38.355251   56230 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1219 03:54:38.366693   56230 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1219 03:54:38.366745   56230 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1219 03:54:38.386325   56230 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1219 03:54:38.397641   56230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:54:38.542778   56230 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1219 03:54:38.656266   56230 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1219 03:54:38.656354   56230 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1219 03:54:38.662225   56230 start.go:564] Will wait 60s for crictl version
	I1219 03:54:38.662286   56230 ssh_runner.go:195] Run: which crictl
	I1219 03:54:38.666072   56230 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1219 03:54:38.702242   56230 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1219 03:54:38.702324   56230 ssh_runner.go:195] Run: crio --version
	I1219 03:54:38.730733   56230 ssh_runner.go:195] Run: crio --version
	I1219 03:54:38.760806   56230 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.29.1 ...
	I1219 03:54:38.764622   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:38.765017   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:38.765041   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:38.765207   56230 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1219 03:54:38.769555   56230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:54:38.784218   56230 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-168174 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-168174 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.68 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1219 03:54:38.784318   56230 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1219 03:54:38.784389   56230 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:54:38.817654   56230 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.3". assuming images are not preloaded.
	I1219 03:54:38.817721   56230 ssh_runner.go:195] Run: which lz4
	I1219 03:54:38.821795   56230 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1219 03:54:38.826295   56230 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1219 03:54:38.826327   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340314847 bytes)
	I1219 03:54:36.531538   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:37.031367   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:37.531677   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:38.031134   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:38.532312   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:39.032552   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:39.532678   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:40.031267   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:40.531858   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:41.032379   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:38.502453   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:39.002949   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:39.502152   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:40.002580   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:40.501440   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:41.002612   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:41.501822   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:42.002247   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:42.502196   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:43.002641   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:40.045060   56230 crio.go:462] duration metric: took 1.223302426s to copy over tarball
	I1219 03:54:40.045121   56230 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1219 03:54:41.702628   56230 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.657483082s)
	I1219 03:54:41.702653   56230 crio.go:469] duration metric: took 1.657571319s to extract the tarball
	I1219 03:54:41.702661   56230 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1219 03:54:41.742396   56230 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:54:41.778250   56230 crio.go:514] all images are preloaded for cri-o runtime.
	I1219 03:54:41.778274   56230 cache_images.go:86] Images are preloaded, skipping loading
	I1219 03:54:41.778281   56230 kubeadm.go:935] updating node { 192.168.50.68 8444 v1.34.3 crio true true} ...
	I1219 03:54:41.778393   56230 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-168174 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.68
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-168174 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1219 03:54:41.778466   56230 ssh_runner.go:195] Run: crio config
	I1219 03:54:41.824084   56230 cni.go:84] Creating CNI manager for ""
	I1219 03:54:41.824114   56230 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1219 03:54:41.824134   56230 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1219 03:54:41.824161   56230 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.68 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-168174 NodeName:default-k8s-diff-port-168174 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.68"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.68 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1219 03:54:41.824332   56230 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.68
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-168174"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.68"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.68"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1219 03:54:41.824436   56230 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1219 03:54:41.838181   56230 binaries.go:51] Found k8s binaries, skipping transfer
	I1219 03:54:41.838263   56230 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1219 03:54:41.850122   56230 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1219 03:54:41.871647   56230 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1219 03:54:41.891031   56230 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
	I1219 03:54:41.910970   56230 ssh_runner.go:195] Run: grep 192.168.50.68	control-plane.minikube.internal$ /etc/hosts
	I1219 03:54:41.915265   56230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.68	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:54:41.929042   56230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:54:42.077837   56230 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:54:42.111492   56230 certs.go:69] Setting up /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/default-k8s-diff-port-168174 for IP: 192.168.50.68
	I1219 03:54:42.111515   56230 certs.go:195] generating shared ca certs ...
	I1219 03:54:42.111529   56230 certs.go:227] acquiring lock for ca certs: {Name:mk433b742ffd55233764aebe3c6f298e0d1f02ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:54:42.111713   56230 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22230-5010/.minikube/ca.key
	I1219 03:54:42.111782   56230 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22230-5010/.minikube/proxy-client-ca.key
	I1219 03:54:42.111804   56230 certs.go:257] generating profile certs ...
	I1219 03:54:42.111942   56230 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/default-k8s-diff-port-168174/client.key
	I1219 03:54:42.112027   56230 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/default-k8s-diff-port-168174/apiserver.key.ed8a7a08
	I1219 03:54:42.112078   56230 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/default-k8s-diff-port-168174/proxy-client.key
	I1219 03:54:42.112201   56230 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/8937.pem (1338 bytes)
	W1219 03:54:42.112240   56230 certs.go:480] ignoring /home/jenkins/minikube-integration/22230-5010/.minikube/certs/8937_empty.pem, impossibly tiny 0 bytes
	I1219 03:54:42.112252   56230 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca-key.pem (1675 bytes)
	I1219 03:54:42.112280   56230 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca.pem (1082 bytes)
	I1219 03:54:42.112309   56230 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/cert.pem (1123 bytes)
	I1219 03:54:42.112361   56230 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/key.pem (1679 bytes)
	I1219 03:54:42.112423   56230 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/files/etc/ssl/certs/89372.pem (1708 bytes)
	I1219 03:54:42.113420   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1219 03:54:42.154291   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1219 03:54:42.194006   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1219 03:54:42.221732   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1219 03:54:42.253007   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/default-k8s-diff-port-168174/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1219 03:54:42.280935   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/default-k8s-diff-port-168174/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1219 03:54:42.315083   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/default-k8s-diff-port-168174/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1219 03:54:42.342426   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/default-k8s-diff-port-168174/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1219 03:54:42.371444   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1219 03:54:42.402350   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/certs/8937.pem --> /usr/share/ca-certificates/8937.pem (1338 bytes)
	I1219 03:54:42.430533   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/files/etc/ssl/certs/89372.pem --> /usr/share/ca-certificates/89372.pem (1708 bytes)
	I1219 03:54:42.462798   56230 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1219 03:54:42.483977   56230 ssh_runner.go:195] Run: openssl version
	I1219 03:54:42.490839   56230 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/89372.pem
	I1219 03:54:42.503565   56230 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/89372.pem /etc/ssl/certs/89372.pem
	I1219 03:54:42.514852   56230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/89372.pem
	I1219 03:54:42.520693   56230 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 19 02:46 /usr/share/ca-certificates/89372.pem
	I1219 03:54:42.520739   56230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/89372.pem
	I1219 03:54:42.528108   56230 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1219 03:54:42.539720   56230 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/89372.pem /etc/ssl/certs/3ec20f2e.0
	I1219 03:54:42.550915   56230 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:54:42.561679   56230 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1219 03:54:42.572526   56230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:54:42.577725   56230 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 19 02:26 /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:54:42.577781   56230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:54:42.584786   56230 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1219 03:54:42.596115   56230 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1219 03:54:42.607332   56230 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8937.pem
	I1219 03:54:42.618682   56230 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8937.pem /etc/ssl/certs/8937.pem
	I1219 03:54:42.630292   56230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8937.pem
	I1219 03:54:42.635409   56230 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 19 02:46 /usr/share/ca-certificates/8937.pem
	I1219 03:54:42.635452   56230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8937.pem
	I1219 03:54:42.642710   56230 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1219 03:54:42.654104   56230 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8937.pem /etc/ssl/certs/51391683.0
	I1219 03:54:42.666207   56230 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1219 03:54:42.671385   56230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1219 03:54:42.678373   56230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1219 03:54:42.685534   56230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1219 03:54:42.692140   56230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1219 03:54:42.698549   56230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1219 03:54:42.705279   56230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1219 03:54:42.712285   56230 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-168174 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-168174 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.68 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:54:42.712383   56230 cri.go:57] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1219 03:54:42.712433   56230 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:54:42.745951   56230 cri.go:92] found id: ""
	I1219 03:54:42.746000   56230 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1219 03:54:42.757185   56230 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1219 03:54:42.757201   56230 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1219 03:54:42.757240   56230 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1219 03:54:42.768155   56230 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1219 03:54:42.769156   56230 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-168174" does not appear in /home/jenkins/minikube-integration/22230-5010/kubeconfig
	I1219 03:54:42.769826   56230 kubeconfig.go:62] /home/jenkins/minikube-integration/22230-5010/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-168174" cluster setting kubeconfig missing "default-k8s-diff-port-168174" context setting]
	I1219 03:54:42.770666   56230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5010/kubeconfig: {Name:mk464ac013e036429e360ed78fc04687ed75f83a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:54:42.772207   56230 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1219 03:54:42.782776   56230 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.50.68
	I1219 03:54:42.782799   56230 kubeadm.go:1161] stopping kube-system containers ...
	I1219 03:54:42.782811   56230 cri.go:57] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1219 03:54:42.782853   56230 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:54:42.827373   56230 cri.go:92] found id: ""
	I1219 03:54:42.827451   56230 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1219 03:54:42.855644   56230 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1219 03:54:42.867640   56230 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1219 03:54:42.867664   56230 kubeadm.go:158] found existing configuration files:
	
	I1219 03:54:42.867713   56230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1219 03:54:42.879242   56230 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1219 03:54:42.879345   56230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1219 03:54:42.890737   56230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1219 03:54:42.900979   56230 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1219 03:54:42.901033   56230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1219 03:54:42.911989   56230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1219 03:54:42.922081   56230 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1219 03:54:42.922121   56230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1219 03:54:42.933197   56230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1219 03:54:42.943650   56230 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1219 03:54:42.943706   56230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1219 03:54:42.954819   56230 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1219 03:54:42.965503   56230 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:54:43.022499   56230 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:54:41.533216   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:42.031785   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:42.531762   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:43.032044   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:43.531965   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:44.032348   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:44.532701   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:45.032707   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:45.531729   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:46.032031   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:43.502226   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:44.002160   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:44.502401   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:45.002719   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:45.502332   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:46.001536   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:46.501990   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:47.002547   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:47.501403   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:48.002631   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:44.652743   56230 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.630210852s)
	I1219 03:54:44.652817   56230 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:54:44.912221   56230 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:54:44.996004   56230 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:54:45.067644   56230 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:54:45.067725   56230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:54:45.568080   56230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:54:46.068722   56230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:54:46.568114   56230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:54:47.068013   56230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:54:47.117129   56230 api_server.go:72] duration metric: took 2.049494189s to wait for apiserver process to appear ...
	I1219 03:54:47.117153   56230 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:54:47.117174   56230 api_server.go:253] Checking apiserver healthz at https://192.168.50.68:8444/healthz ...
	I1219 03:54:47.117680   56230 api_server.go:269] stopped: https://192.168.50.68:8444/healthz: Get "https://192.168.50.68:8444/healthz": dial tcp 192.168.50.68:8444: connect: connection refused
	I1219 03:54:47.617323   56230 api_server.go:253] Checking apiserver healthz at https://192.168.50.68:8444/healthz ...
	I1219 03:54:46.534635   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:47.031616   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:47.531182   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:48.032359   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:48.532986   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:49.031214   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:49.532385   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:50.032130   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:50.532478   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:51.031638   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:49.988621   56230 api_server.go:279] https://192.168.50.68:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1219 03:54:49.988647   56230 api_server.go:103] status: https://192.168.50.68:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1219 03:54:49.988661   56230 api_server.go:253] Checking apiserver healthz at https://192.168.50.68:8444/healthz ...
	I1219 03:54:50.015383   56230 api_server.go:279] https://192.168.50.68:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1219 03:54:50.015404   56230 api_server.go:103] status: https://192.168.50.68:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1219 03:54:50.117699   56230 api_server.go:253] Checking apiserver healthz at https://192.168.50.68:8444/healthz ...
	I1219 03:54:50.129872   56230 api_server.go:279] https://192.168.50.68:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:54:50.129895   56230 api_server.go:103] status: https://192.168.50.68:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:54:50.617488   56230 api_server.go:253] Checking apiserver healthz at https://192.168.50.68:8444/healthz ...
	I1219 03:54:50.622220   56230 api_server.go:279] https://192.168.50.68:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:54:50.622255   56230 api_server.go:103] status: https://192.168.50.68:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:54:51.117929   56230 api_server.go:253] Checking apiserver healthz at https://192.168.50.68:8444/healthz ...
	I1219 03:54:51.126710   56230 api_server.go:279] https://192.168.50.68:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:54:51.126741   56230 api_server.go:103] status: https://192.168.50.68:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:54:51.617345   56230 api_server.go:253] Checking apiserver healthz at https://192.168.50.68:8444/healthz ...
	I1219 03:54:51.622349   56230 api_server.go:279] https://192.168.50.68:8444/healthz returned 200:
	ok
	I1219 03:54:51.628913   56230 api_server.go:141] control plane version: v1.34.3
	I1219 03:54:51.628947   56230 api_server.go:131] duration metric: took 4.511785965s to wait for apiserver health ...
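For reference, the 500/200 responses above are minikube polling the apiserver's /healthz endpoint until the post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) finish. The sketch below is only an illustration of such a poll, not minikube's api_server.go implementation; the URL is taken from the log, and skipping TLS verification is a simplification (the real client authenticates with the cluster's certificates).

    // healthzpoll: illustrative sketch of polling an apiserver /healthz endpoint
    // until it returns 200 OK, in the spirit of the checks logged above.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // brevity only
            Timeout:   5 * time.Second,
        }
        for {
            resp, err := client.Get("https://192.168.50.68:8444/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("healthz ok: %s\n", body)
                    return
                }
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond) // the log shows roughly 500ms between checks
        }
    }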
	I1219 03:54:51.628957   56230 cni.go:84] Creating CNI manager for ""
	I1219 03:54:51.628965   56230 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1219 03:54:51.630494   56230 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1219 03:54:51.631426   56230 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1219 03:54:51.647385   56230 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
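The step above copies a 496-byte bridge CNI config to /etc/cni/net.d/1-k8s.conflist; the file's contents are not shown in the log. The sketch below writes a generic bridge+portmap conflist of the kind commonly used with CRI-O, purely as an illustration: the subnet and plugin options are assumptions, not the values minikube actually generated in this run.

    // writecni: writes an illustrative bridge CNI configuration. The JSON below
    // is a generic example, not the file minikube generated here.
    package main

    import "os"

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        // Writing to /etc/cni/net.d requires root, hence the "sudo mkdir -p" above.
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }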
	I1219 03:54:51.669320   56230 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:54:51.675232   56230 system_pods.go:59] 8 kube-system pods found
	I1219 03:54:51.675273   56230 system_pods.go:61] "coredns-66bc5c9577-dnfcc" [b8ee66a7-b129-4499-aad8-a988ecea241c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:54:51.675288   56230 system_pods.go:61] "etcd-default-k8s-diff-port-168174" [af7e1768-95b9-431e-877a-63138a91dcdc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:54:51.675298   56230 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-168174" [a59f79b0-b99b-4092-8b4a-773ff0a04569] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:54:51.675318   56230 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-168174" [405651c9-cdc7-414f-a512-f11b7b580eb8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:54:51.675328   56230 system_pods.go:61] "kube-proxy-zs4wg" [2212782c-32ab-4355-8dda-9117953b0223] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:54:51.675338   56230 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-168174" [a524e706-e546-4aa4-848c-f52eb802ffb0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:54:51.675347   56230 system_pods.go:61] "metrics-server-746fcd58dc-xjkbx" [c6e2f2b2-7b94-4ff2-85ba-e79d72b30655] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:54:51.675358   56230 system_pods.go:61] "storage-provisioner" [ec1d1888-a950-48d5-9b73-440e7556818b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:54:51.675366   56230 system_pods.go:74] duration metric: took 6.023523ms to wait for pod list to return data ...
	I1219 03:54:51.675387   56230 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:54:51.680456   56230 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1219 03:54:51.680483   56230 node_conditions.go:123] node cpu capacity is 2
	I1219 03:54:51.680500   56230 node_conditions.go:105] duration metric: took 5.106096ms to run NodePressure ...
	I1219 03:54:51.680558   56230 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:54:51.941503   56230 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1219 03:54:51.945528   56230 kubeadm.go:744] kubelet initialised
	I1219 03:54:51.945566   56230 kubeadm.go:745] duration metric: took 4.028139ms waiting for restarted kubelet to initialise ...
	I1219 03:54:51.945597   56230 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1219 03:54:51.967660   56230 ops.go:34] apiserver oom_adj: -16
	I1219 03:54:51.967680   56230 kubeadm.go:602] duration metric: took 9.210474475s to restartPrimaryControlPlane
	I1219 03:54:51.967689   56230 kubeadm.go:403] duration metric: took 9.255411647s to StartCluster
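ops.go reports the apiserver's oom_adj (-16) by reading it out of /proc via the pgrep pipeline shown a few lines up. A hypothetical Go equivalent of that check is sketched below (PID lookup with pgrep, then a read of /proc/<pid>/oom_adj); it is an illustration, not minikube's code.

    // oomadj: looks up the kube-apiserver PID with pgrep and prints its oom_adj,
    // mirroring the shell pipeline logged above. Sketch only.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
        if err != nil {
            panic(err)
        }
        pid := strings.TrimSpace(string(out))
        data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            panic(err)
        }
        fmt.Printf("apiserver oom_adj: %s", data) // this run reported -16
    }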
	I1219 03:54:51.967705   56230 settings.go:142] acquiring lock: {Name:mkb131693a40b9aa50c302192272518c3561c861 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:54:51.967787   56230 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-5010/kubeconfig
	I1219 03:54:51.970216   56230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5010/kubeconfig: {Name:mk464ac013e036429e360ed78fc04687ed75f83a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:54:51.970558   56230 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.50.68 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1219 03:54:51.970693   56230 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1219 03:54:51.970789   56230 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-168174"
	I1219 03:54:51.970812   56230 config.go:182] Loaded profile config "default-k8s-diff-port-168174": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:54:51.970826   56230 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-168174"
	I1219 03:54:51.970825   56230 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-168174"
	I1219 03:54:51.970846   56230 addons.go:70] Setting metrics-server=true in profile "default-k8s-diff-port-168174"
	I1219 03:54:51.970858   56230 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-168174"
	I1219 03:54:51.970858   56230 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-168174"
	I1219 03:54:51.970884   56230 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-168174"
	W1219 03:54:51.970893   56230 addons.go:248] addon dashboard should already be in state true
	I1219 03:54:51.970919   56230 host.go:66] Checking if "default-k8s-diff-port-168174" exists ...
	W1219 03:54:51.970836   56230 addons.go:248] addon storage-provisioner should already be in state true
	I1219 03:54:51.970978   56230 host.go:66] Checking if "default-k8s-diff-port-168174" exists ...
	I1219 03:54:51.970861   56230 addons.go:239] Setting addon metrics-server=true in "default-k8s-diff-port-168174"
	W1219 03:54:51.971035   56230 addons.go:248] addon metrics-server should already be in state true
	I1219 03:54:51.971057   56230 host.go:66] Checking if "default-k8s-diff-port-168174" exists ...
	I1219 03:54:51.971960   56230 out.go:179] * Verifying Kubernetes components...
	I1219 03:54:51.973008   56230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:54:51.974650   56230 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:54:51.974726   56230 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
	I1219 03:54:51.974952   56230 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1219 03:54:51.975006   56230 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 03:54:48.502712   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:49.001711   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:49.501592   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:50.001601   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:50.501313   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:51.002296   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:51.502360   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:52.002651   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:52.503108   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:53.002165   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:51.975433   56230 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-168174"
	W1219 03:54:51.975454   56230 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:54:51.975493   56230 host.go:66] Checking if "default-k8s-diff-port-168174" exists ...
	I1219 03:54:51.975992   56230 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1219 03:54:51.976010   56230 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1219 03:54:51.976037   56230 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:54:51.976049   56230 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:54:51.978029   56230 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:54:51.978047   56230 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:54:51.979030   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:51.979580   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:51.979617   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:51.979992   56230 sshutil.go:53] new ssh client: &{IP:192.168.50.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/default-k8s-diff-port-168174/id_rsa Username:docker}
	I1219 03:54:51.980624   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:51.980627   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:51.981054   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:51.981088   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:51.981091   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:51.981123   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:51.981299   56230 sshutil.go:53] new ssh client: &{IP:192.168.50.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/default-k8s-diff-port-168174/id_rsa Username:docker}
	I1219 03:54:51.981430   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:51.981442   56230 sshutil.go:53] new ssh client: &{IP:192.168.50.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/default-k8s-diff-port-168174/id_rsa Username:docker}
	I1219 03:54:51.981908   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:51.981931   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:51.982118   56230 sshutil.go:53] new ssh client: &{IP:192.168.50.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/default-k8s-diff-port-168174/id_rsa Username:docker}
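The sshutil.go lines above open SSH sessions to the node at 192.168.50.68:22 with the per-machine id_rsa key so that addon files can be copied and commands run. A minimal sketch of such a client using golang.org/x/crypto/ssh follows; disabling host-key checking is done here only for brevity and is not a claim about what minikube does.

    // sshclient: dials the node over SSH with a private key and runs one command,
    // roughly what the "new ssh client" lines above set up. Sketch only.
    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/22230-5010/.minikube/machines/default-k8s-diff-port-168174/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // brevity only
        }
        client, err := ssh.Dial("tcp", "192.168.50.68:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()
        out, err := session.CombinedOutput("sudo systemctl start kubelet")
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out))
    }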
	I1219 03:54:52.329267   56230 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:54:52.362110   56230 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-168174" to be "Ready" ...
	I1219 03:54:52.365712   56230 node_ready.go:49] node "default-k8s-diff-port-168174" is "Ready"
	I1219 03:54:52.365740   56230 node_ready.go:38] duration metric: took 3.595186ms for node "default-k8s-diff-port-168174" to be "Ready" ...
	I1219 03:54:52.365758   56230 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:54:52.365821   56230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:54:52.390728   56230 api_server.go:72] duration metric: took 420.108978ms to wait for apiserver process to appear ...
	I1219 03:54:52.390759   56230 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:54:52.390781   56230 api_server.go:253] Checking apiserver healthz at https://192.168.50.68:8444/healthz ...
	I1219 03:54:52.397481   56230 api_server.go:279] https://192.168.50.68:8444/healthz returned 200:
	ok
	I1219 03:54:52.398595   56230 api_server.go:141] control plane version: v1.34.3
	I1219 03:54:52.398619   56230 api_server.go:131] duration metric: took 7.851716ms to wait for apiserver health ...
	I1219 03:54:52.398634   56230 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:54:52.403556   56230 system_pods.go:59] 8 kube-system pods found
	I1219 03:54:52.403621   56230 system_pods.go:61] "coredns-66bc5c9577-dnfcc" [b8ee66a7-b129-4499-aad8-a988ecea241c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:54:52.403638   56230 system_pods.go:61] "etcd-default-k8s-diff-port-168174" [af7e1768-95b9-431e-877a-63138a91dcdc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:54:52.403653   56230 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-168174" [a59f79b0-b99b-4092-8b4a-773ff0a04569] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:54:52.403664   56230 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-168174" [405651c9-cdc7-414f-a512-f11b7b580eb8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:54:52.403676   56230 system_pods.go:61] "kube-proxy-zs4wg" [2212782c-32ab-4355-8dda-9117953b0223] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:54:52.403690   56230 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-168174" [a524e706-e546-4aa4-848c-f52eb802ffb0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:54:52.403705   56230 system_pods.go:61] "metrics-server-746fcd58dc-xjkbx" [c6e2f2b2-7b94-4ff2-85ba-e79d72b30655] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:54:52.403714   56230 system_pods.go:61] "storage-provisioner" [ec1d1888-a950-48d5-9b73-440e7556818b] Running
	I1219 03:54:52.403725   56230 system_pods.go:74] duration metric: took 5.080532ms to wait for pod list to return data ...
	I1219 03:54:52.403737   56230 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:54:52.406964   56230 default_sa.go:45] found service account: "default"
	I1219 03:54:52.406989   56230 default_sa.go:55] duration metric: took 3.241415ms for default service account to be created ...
	I1219 03:54:52.406999   56230 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:54:52.412763   56230 system_pods.go:86] 8 kube-system pods found
	I1219 03:54:52.412787   56230 system_pods.go:89] "coredns-66bc5c9577-dnfcc" [b8ee66a7-b129-4499-aad8-a988ecea241c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:54:52.412797   56230 system_pods.go:89] "etcd-default-k8s-diff-port-168174" [af7e1768-95b9-431e-877a-63138a91dcdc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:54:52.412804   56230 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-168174" [a59f79b0-b99b-4092-8b4a-773ff0a04569] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:54:52.412810   56230 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-168174" [405651c9-cdc7-414f-a512-f11b7b580eb8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:54:52.412816   56230 system_pods.go:89] "kube-proxy-zs4wg" [2212782c-32ab-4355-8dda-9117953b0223] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:54:52.412821   56230 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-168174" [a524e706-e546-4aa4-848c-f52eb802ffb0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:54:52.412826   56230 system_pods.go:89] "metrics-server-746fcd58dc-xjkbx" [c6e2f2b2-7b94-4ff2-85ba-e79d72b30655] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:54:52.412830   56230 system_pods.go:89] "storage-provisioner" [ec1d1888-a950-48d5-9b73-440e7556818b] Running
	I1219 03:54:52.412837   56230 system_pods.go:126] duration metric: took 5.832618ms to wait for k8s-apps to be running ...
	I1219 03:54:52.412847   56230 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:54:52.412890   56230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:54:52.437131   56230 system_svc.go:56] duration metric: took 24.267658ms WaitForService to wait for kubelet
	I1219 03:54:52.437166   56230 kubeadm.go:587] duration metric: took 466.551246ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:54:52.437188   56230 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:54:52.440753   56230 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1219 03:54:52.440776   56230 node_conditions.go:123] node cpu capacity is 2
	I1219 03:54:52.440789   56230 node_conditions.go:105] duration metric: took 3.595658ms to run NodePressure ...
	I1219 03:54:52.440804   56230 start.go:242] waiting for startup goroutines ...
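The verification pass above confirms the node is Ready, the apiserver process and /healthz are up, the default service account exists, and the kubelet unit is active (system_svc.go runs "sudo systemctl is-active --quiet service kubelet"). A small Go sketch of that last check, relying on systemctl's exit status, is shown below for reference; it is an illustration, not the test's implementation.

    // kubeletactive: checks whether the kubelet systemd unit is active using
    // systemctl's exit status, as the WaitForService step above does. Sketch only.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // `systemctl is-active --quiet <unit>` exits 0 when the unit is active.
        if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
            fmt.Println("kubelet is not active:", err)
            return
        }
        fmt.Println("kubelet is active")
    }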
	I1219 03:54:52.571235   56230 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:54:52.579720   56230 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:54:52.588696   56230 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	I1219 03:54:52.607999   56230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:54:52.623079   56230 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1219 03:54:52.623103   56230 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1219 03:54:52.632201   56230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:54:52.689775   56230 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1219 03:54:52.689802   56230 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1219 03:54:52.755241   56230 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:54:52.755280   56230 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1219 03:54:52.860818   56230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
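Each addon manifest is copied to /etc/kubernetes/addons and applied with the bundled kubectl, with KUBECONFIG pointing at /var/lib/minikube/kubeconfig (see the Run lines above). The hedged sketch below invokes kubectl the same way; the binary and manifest paths are taken from the log, but the wrapper itself is illustrative, not minikube's addons code.

    // applyaddons: runs the bundled kubectl against the node-local kubeconfig to
    // apply the metrics-server manifests, mirroring the command logged above.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        cmd := exec.Command(
            "/var/lib/minikube/binaries/v1.34.3/kubectl", "apply",
            "-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
            "-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
            "-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
            "-f", "/etc/kubernetes/addons/metrics-server-service.yaml",
        )
        cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            panic(err)
        }
    }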
	I1219 03:54:51.531836   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:52.032945   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:52.532771   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:53.031681   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:53.532510   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:54.032369   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:54.532915   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:55.031905   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:55.531152   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:56.032011   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:53.502165   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:54.002813   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:54.501582   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:55.002986   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:55.501711   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:56.000984   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:56.502399   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:57.002200   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:57.502369   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:58.002000   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:54.655285   56230 ssh_runner.go:235] Completed: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh": (2.066552827s)
	I1219 03:54:54.655390   56230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	I1219 03:54:54.655405   56230 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.047371795s)
	I1219 03:54:54.655528   56230 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.023298979s)
	I1219 03:54:54.655657   56230 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.794802456s)
	I1219 03:54:54.655684   56230 addons.go:500] Verifying addon metrics-server=true in "default-k8s-diff-port-168174"
	I1219 03:54:57.969258   56230 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (3.313828747s)
	I1219 03:54:57.969346   56230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:54:58.498709   56230 addons.go:500] Verifying addon dashboard=true in "default-k8s-diff-port-168174"
	I1219 03:54:58.501734   56230 out.go:179] * Verifying dashboard addon...
	I1219 03:54:58.504348   56230 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:54:58.510036   56230 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:54:58.510056   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:59.010436   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
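The long runs of kapi.go:96 lines that follow are three parallel profiles (PIDs 55595, 55957, 56230) each polling, roughly every 500ms, for a pod labelled app.kubernetes.io/name=kubernetes-dashboard-web in the kubernetes-dashboard namespace to leave Pending; in this excerpt none of them does. For reference, a client-go sketch of that kind of wait is shown below; the kubeconfig path and the Running-phase condition are assumptions of the sketch, not a description of kapi.go.

    // waitdashboard: polls for a Running pod matching the dashboard-web label,
    // the same selector the kapi.go lines above are waiting on. Sketch only.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(), metav1.ListOptions{
                LabelSelector: "app.kubernetes.io/name=kubernetes-dashboard-web",
            })
            if err == nil {
                for _, p := range pods.Items {
                    if p.Status.Phase == corev1.PodRunning {
                        fmt.Println("dashboard-web pod is Running:", p.Name)
                        return
                    }
                    fmt.Printf("pod %s still %s\n", p.Name, p.Status.Phase)
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
    }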
	I1219 03:54:56.532022   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:57.031585   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:57.531985   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:58.032925   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:58.533378   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:59.032504   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:59.530653   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:00.031045   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:00.531549   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:01.030879   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:58.502926   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:59.001807   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:59.501672   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:00.001239   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:00.501991   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:01.001622   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:01.501692   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:02.002517   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:02.502331   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:03.001757   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:59.508121   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:00.008244   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:00.507884   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:01.008223   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:01.507861   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:02.012677   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:02.507898   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:03.008121   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:03.508367   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:04.008842   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:01.531235   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:02.031845   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:02.531542   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:03.030822   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:03.532087   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:04.032140   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:04.532095   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:05.032183   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:05.532546   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:06.031699   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:03.501262   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:04.001782   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:04.501640   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:05.002705   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:05.501849   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:06.001647   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:06.502225   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:07.002170   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:07.502397   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:08.003244   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:04.508346   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:05.007493   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:05.507987   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:06.007825   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:06.507635   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:07.008062   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:07.507047   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:08.008442   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:08.510089   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:09.008180   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:06.536198   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:07.032221   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:07.532227   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:08.032198   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:08.531813   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:09.031889   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:09.531666   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:10.031122   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:10.532149   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:11.031983   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:08.502642   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:09.001743   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:09.502066   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:10.002044   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:10.502017   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:11.002386   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:11.502467   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:12.002107   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:12.502677   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:13.002161   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:09.507112   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:10.008461   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:10.508312   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:11.008611   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:11.508384   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:12.008280   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:12.508541   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:13.008623   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:13.508431   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:14.009349   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:11.532619   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:12.031875   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:12.532589   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:13.031244   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:13.531877   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:14.031690   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:14.531758   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:15.032196   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:15.531803   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:16.030943   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:13.502018   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:14.002044   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:14.502072   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:15.002330   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:15.502958   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:16.001850   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:16.501605   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:17.001853   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:17.501780   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:18.001784   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:14.508124   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:15.008333   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:15.508545   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:16.008130   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:16.507985   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:17.007539   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:17.508141   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:18.007919   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:18.507523   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:19.008842   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:16.532418   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:17.032219   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:17.532547   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:18.032233   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:18.532551   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:19.033166   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:19.531532   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:20.031971   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:20.532050   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:21.032787   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:18.501956   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:19.002220   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:19.502226   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:20.003355   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:20.501800   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:21.001708   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:21.501127   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:22.003195   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:22.502775   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:23.002072   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:19.507432   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:20.008746   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:20.508268   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:21.008770   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:21.508749   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:22.009746   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:22.509595   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:23.008351   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:23.508700   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:24.009427   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:21.532398   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:22.033297   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:22.531966   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:23.032953   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:23.532813   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:24.032632   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:24.531743   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:25.031446   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:25.531999   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:26.032229   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:23.502200   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:24.002490   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:24.502281   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:25.002814   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:25.502200   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:26.001250   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:26.502303   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:27.003201   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:27.501897   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:28.001740   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:24.508429   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:25.008390   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:25.507941   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:26.007624   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:26.508269   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:27.008250   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:27.508598   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:28.008371   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:28.508380   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:29.008493   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:26.531979   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:27.031753   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:27.531087   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:28.031427   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:28.533856   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:29.032558   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:29.532153   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:30.031923   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:30.531960   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:31.032601   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:28.501692   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:29.001922   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:29.501325   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:30.003828   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:30.502896   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:31.002912   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:31.501760   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:32.001551   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:32.503707   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:33.002109   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:29.508499   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:30.009212   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:30.508512   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:31.008223   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:31.508681   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:32.008636   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:32.508533   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:33.008248   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:33.507749   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:34.010179   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:31.531439   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:32.033650   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:32.532006   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:33.033362   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:33.532163   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:34.032485   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:34.532885   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:35.032795   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:35.531588   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:36.032179   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:33.502338   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:34.001955   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:34.502091   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:35.002307   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:35.502793   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:36.000849   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:36.501606   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:37.001431   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:37.502037   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:38.001873   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:34.507899   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:35.009735   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:35.508708   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:36.008927   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:36.508321   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:37.008289   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:37.507348   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:38.009029   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:38.507232   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:39.007368   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:36.532210   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:37.032304   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:37.531955   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:38.031259   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:38.532301   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:39.032157   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:39.531594   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:40.032495   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:40.532008   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:41.032133   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:38.501770   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:39.002435   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:39.502300   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:40.002307   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:40.501724   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:41.002293   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:41.503636   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:42.001410   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:42.504029   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:43.001789   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:39.508096   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:40.009356   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:40.507852   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:41.007460   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:41.508444   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:42.008364   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:42.507697   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:43.008880   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:43.508861   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:44.008835   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:41.532091   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:42.032010   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:42.531306   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:43.031852   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:43.531186   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:44.032131   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:44.531205   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:45.032529   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:45.532677   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:46.033016   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:43.502472   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:44.001435   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:44.502091   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:45.001734   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:45.501352   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:46.002340   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:46.502315   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:47.002534   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:47.501024   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:48.001249   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:44.507519   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:45.008950   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:45.507774   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:46.009594   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:46.507884   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:47.007928   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:47.507777   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:48.009168   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:48.507455   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:49.009287   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:46.532161   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:47.032066   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:47.531975   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:48.031583   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:48.532654   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:49.033122   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:49.531676   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:50.031185   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:50.532468   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:51.032385   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:48.501786   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:49.002482   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:49.502524   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:50.001342   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:50.502134   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:51.003763   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:51.502136   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:52.001766   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:52.502345   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:53.001599   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:49.508543   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:50.009242   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:50.508054   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:51.009144   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:51.508104   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:52.008088   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:52.507250   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:53.009098   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:53.507611   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:54.010519   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:51.531780   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:52.031001   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:52.532489   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:53.032242   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:53.536320   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:54.033455   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:54.532129   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:55.031767   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:55.531204   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:56.031365   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:53.503558   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:54.001726   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:54.501144   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:55.001613   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:55.502734   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:56.002274   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:56.501831   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:57.001426   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:57.503884   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:58.001283   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:54.508611   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:55.009353   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:55.507657   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:56.007664   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:56.507861   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:57.007544   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:57.507689   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:58.008330   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:58.508469   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:59.009715   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:56.532345   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:57.032801   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:57.531689   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:58.032877   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:58.532081   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:59.032107   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:59.531920   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:00.031409   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:00.532046   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:01.032408   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:58.501828   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:59.001518   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:59.502563   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:00.002507   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:00.501333   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:01.002564   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:01.502379   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:02.001485   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:02.501810   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:03.001402   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:59.508191   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:00.008241   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:00.508833   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:01.008453   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:01.508563   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:02.008613   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:02.509524   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:03.008844   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:03.507854   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:04.007055   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:01.532493   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:02.033676   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:02.532206   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:03.031784   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:03.532118   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:04.032496   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:04.532286   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:05.032724   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:05.533137   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:06.031294   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:03.502666   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:04.001524   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:04.501177   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:05.001644   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:05.503328   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:06.002433   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:06.502361   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:07.002735   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:07.501301   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:08.001765   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:04.508242   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:05.008660   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:05.507962   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:06.008796   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:06.508749   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:07.009651   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:07.508080   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:08.008550   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:08.509473   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:09.008511   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:06.533457   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:07.032379   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:07.532473   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:08.032865   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:08.531464   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:09.032764   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:09.531236   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:10.032148   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:10.532471   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:11.032216   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:08.502684   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:09.002188   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:09.503237   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:10.001912   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:10.501622   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:11.001891   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:11.502012   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:12.001650   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:12.502856   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:13.001921   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:09.507699   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:10.008027   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:10.508703   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:11.008209   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:11.508178   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:12.008432   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:12.509550   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:13.008319   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:13.507828   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:14.007561   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:11.531544   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:12.032519   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:12.532198   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:13.032915   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:13.531514   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:14.032723   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:14.531505   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:15.033182   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:15.531615   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:16.032916   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:13.501854   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:14.001080   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:14.503363   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:15.002618   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:15.502840   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:16.000881   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:16.501714   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:17.002610   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:17.502008   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:18.001866   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:14.508549   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:15.007753   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:15.508465   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:16.008319   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:16.508222   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:17.007904   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:17.508163   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:18.008464   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:18.508145   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:19.007728   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:16.531667   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:17.033191   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:17.531547   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:18.032788   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:18.532591   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:19.033086   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:19.531237   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:20.032101   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:20.532279   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:21.032764   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:18.501636   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:19.001241   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:19.501915   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:20.001761   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:20.501797   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:21.001851   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:21.502732   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:22.001114   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:22.502538   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:23.001630   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:19.508503   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:20.009432   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:20.508442   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:21.008564   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:21.508754   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:22.008668   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:22.508947   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:23.007984   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:23.507426   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:24.008776   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:21.531412   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:22.031826   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:22.531169   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:23.032838   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:23.531368   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:24.033085   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:24.531343   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:25.032505   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:25.532373   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:26.032078   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:23.501962   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:24.001801   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:24.502380   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:25.001940   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:25.501661   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:26.001355   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:26.501727   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:27.002704   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:27.502515   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:28.001261   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:24.508926   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:25.008697   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:25.508155   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:26.008403   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:26.509752   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:27.009152   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:27.507692   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:28.008539   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:28.508833   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:29.008403   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:26.532212   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:27.031709   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:27.531512   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:28.032880   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:28.531683   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:28.502513   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:29.507936   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	[... the three pollers (processes 55595, 55957, and 56230) repeat this identical kapi.go:96 check roughly every 500ms from 03:56:28 onward; the kubernetes-dashboard-web pod remains "Pending: [<nil>]" for the entire window shown ...]
	I1219 03:57:43.002870   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:44.008565   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:44.531802   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:45.032162   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:45.532320   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:46.031297   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:43.503203   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:44.002682   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:44.502066   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:45.001775   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:45.501403   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:46.002298   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:46.502073   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:47.001483   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:47.501639   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:48.002266   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:44.508296   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:45.008881   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:45.508078   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:46.007871   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:46.508564   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:47.008609   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:47.507625   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:48.008815   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:48.507996   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:49.009033   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:46.531916   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:47.032003   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:47.535669   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:48.032260   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:48.533368   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:49.032732   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:49.532734   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:50.031076   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:50.531706   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:51.031411   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:48.502350   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:49.002202   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:49.502113   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:50.002611   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:50.501323   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:51.002251   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:51.501726   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:52.003470   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:52.502490   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:53.002224   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:49.507379   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:50.007665   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:50.508534   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:51.009007   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:51.509344   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:52.007746   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:52.508532   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:53.009346   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:53.507367   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:54.009828   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:51.532471   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:52.032182   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:52.531696   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:53.031891   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:53.531523   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:54.032527   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:54.531544   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:55.033055   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:55.532251   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:56.032012   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:53.501701   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:54.001815   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:54.501403   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:55.001721   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:55.502408   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:56.006350   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:56.502718   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:57.000975   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:57.502050   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:58.001993   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:54.507665   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:55.010022   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:55.507891   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:56.017962   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:56.509387   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:57.009499   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:57.508592   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:58.007712   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:58.509159   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:59.008024   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:56.532417   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:57.032702   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:57.531960   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:58.032030   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:58.532438   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:59.032562   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:59.532541   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:00.031906   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:00.533707   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:01.031481   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:58.501333   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:59.002706   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:59.501390   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:00.003083   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:00.501477   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:01.003243   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:01.502051   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:02.002119   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:02.502250   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:03.001892   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:59.508467   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:00.007934   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:00.508461   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:01.009263   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:01.508676   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:02.007597   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:02.508263   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:03.008661   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:03.508545   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:04.008653   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:01.533009   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:02.032493   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:02.532027   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:03.032116   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:03.531261   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:04.034181   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:04.531702   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:05.032409   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:05.533808   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:06.031246   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:03.501444   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:04.002084   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:04.501717   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:05.002397   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:05.502329   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:06.002161   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:06.501724   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:07.001096   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:07.501676   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:08.001373   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:04.508793   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:05.009558   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:05.508307   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:06.008745   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:06.508478   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:07.009365   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:07.508285   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:08.008394   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:08.507659   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:09.008883   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:06.531671   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:07.032663   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:07.532654   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:08.032443   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:08.531860   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:09.031786   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:09.532418   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:10.032031   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:10.531026   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:11.031184   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:08.502311   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:09.001761   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:09.501921   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:10.001779   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:10.502884   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:11.000815   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:11.502204   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:12.002552   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:12.502487   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:13.002005   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:09.509248   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:10.008315   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:10.507712   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:11.009764   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:11.509368   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:12.007428   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:12.508548   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:13.008954   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:13.508930   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:14.008936   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:11.532311   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:12.032156   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:12.531768   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:13.031259   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:13.532112   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:14.032440   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:14.533083   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:15.031470   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:15.533077   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:16.031626   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:13.503116   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:14.002138   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:14.502257   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:15.002721   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:15.501511   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:16.002183   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:16.502306   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:17.002714   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:17.501224   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:18.003247   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:14.508715   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:15.008752   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:15.509114   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:16.007677   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:16.508804   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:17.009618   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:17.508120   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:18.007885   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:18.507480   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:19.008978   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:16.532146   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:17.031615   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:17.532552   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:18.031381   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:18.532386   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:19.032461   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:19.533200   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:20.032375   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:20.531718   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:21.030828   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:18.502028   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:19.001762   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:19.501418   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:20.002914   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:20.501869   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:21.001896   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:21.501339   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:22.002565   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:22.502667   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:23.001134   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:19.507828   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:20.008203   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:20.508364   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:21.008929   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:21.508117   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:22.007662   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:22.507899   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:23.008710   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:23.507212   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:24.008474   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:21.532845   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:22.032290   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:22.532646   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:23.031957   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:23.531378   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:24.032264   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:24.531928   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:25.031473   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:25.532386   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:26.032382   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:23.502231   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:24.002752   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:24.500970   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:25.000924   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:25.501030   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:26.002189   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:26.502781   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:27.002623   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:27.501117   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:28.001792   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:24.508109   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:25.008892   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:25.508228   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:26.007643   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:26.508278   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:27.009399   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:27.508216   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:28.008474   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:28.507952   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:29.008596   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:26.532465   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:27.032800   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:27.531643   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:28.031616   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:28.533745   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:29.031460   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:29.532616   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:30.032471   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:30.532228   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:31.031437   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:28.501355   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:29.001764   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:29.501298   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:30.003052   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:30.502950   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:31.001770   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:31.501738   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:32.003204   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:32.503749   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:33.000964   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:29.508615   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:30.009187   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:30.507594   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:31.009258   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:31.508166   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:32.008876   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:32.508828   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:33.009323   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:33.507635   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:34.008857   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:31.532499   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:32.033303   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:32.532140   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:33.031451   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:33.532012   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:34.031739   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:34.531969   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:35.031026   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:35.531884   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:36.032850   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:33.501466   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:34.002962   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:34.501319   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:35.002095   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:35.501455   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:36.002904   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:36.502079   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:37.002351   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:37.502139   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:38.002366   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:34.507536   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:35.009458   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:35.508342   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:36.008114   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:36.507689   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:37.008772   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:37.508175   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:38.008253   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:38.508521   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:39.010486   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:36.531019   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:37.032116   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:37.531731   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:38.031746   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:38.531610   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:39.032124   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:39.531488   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:40.032358   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:40.532561   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:41.032192   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:38.502021   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:39.001431   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:39.502831   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:40.001874   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:40.501461   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:41.002135   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:41.502101   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:42.002403   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:42.501826   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:43.001388   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:39.508693   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:40.008934   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:40.507098   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:41.007956   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:41.508938   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:42.007971   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:42.508613   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:43.009088   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:43.507422   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:44.008448   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:41.531909   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:42.031872   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:42.532556   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:43.032306   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:43.532154   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:44.032667   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:44.531742   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:45.032077   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:45.531946   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:46.033451   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:43.502067   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:44.002320   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:44.501957   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:45.002135   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:45.501241   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:46.002784   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:46.502988   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:47.004826   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:47.502313   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:48.002638   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:44.507745   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:45.009163   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:45.508092   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:46.008607   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:46.508116   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:47.008371   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:47.507434   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:48.008847   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:48.507621   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:49.008655   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:46.532124   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:47.032109   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:47.531627   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:48.031388   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:48.532769   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:49.031521   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:49.531483   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:50.032091   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:50.532187   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:51.031753   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:48.502460   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:49.002540   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:49.501945   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:50.002223   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:50.501542   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:51.001659   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:51.501286   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:52.002482   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:52.502722   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:53.001266   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:49.507988   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:50.009496   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:50.509180   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:51.008698   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:51.508772   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:52.008904   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:52.508816   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:53.009066   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:53.507818   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:54.008395   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:51.531785   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:52.031722   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:52.531144   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:53.031857   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:53.531058   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:54.032168   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:54.532777   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:55.032608   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:55.531658   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:56.032994   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:53.501701   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:54.002308   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:54.502069   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:55.002072   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:55.501731   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:56.002148   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:56.503078   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:57.003123   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:57.501899   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:58.002103   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:54.507702   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:55.009409   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:55.508752   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:56.009166   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:56.508117   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:57.009342   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:57.508229   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:58.007650   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:58.514151   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:59.008149   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:56.531183   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:57.030952   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:57.531064   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:58.032714   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:58.532410   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:59.031666   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:59.531454   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:00.032368   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:00.532161   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:01.031779   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:58.502176   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:59.001419   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:59.502257   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:00.002485   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:00.501904   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:01.001645   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:01.501262   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:02.002789   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:02.502720   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:03.001933   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:59.507580   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:00.008671   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:00.508761   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:01.009888   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:01.508049   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:02.009018   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:02.508299   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:03.009024   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:03.507584   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:04.008065   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:01.530966   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:02.031880   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:02.531265   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:03.031652   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:03.532860   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:04.031804   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:04.532296   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:05.031908   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:05.531566   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:06.031333   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:03.501384   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:04.002328   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:04.501432   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:05.002402   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:05.502445   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:06.004922   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:06.501916   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:07.002619   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:07.501038   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:08.001821   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:04.507960   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:05.008882   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:05.508735   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:06.009370   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:06.508266   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:07.009541   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:07.508167   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:08.008293   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:08.509228   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:09.008514   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:06.531404   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:07.032313   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:07.532704   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:08.033420   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:08.532159   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:09.032178   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:09.531613   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:10.035741   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:10.532501   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:11.033104   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:08.502173   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:09.002026   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:09.501239   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:10.001300   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:10.503227   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:11.001826   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:11.501434   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:12.003235   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:12.502432   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:13.002356   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:09.507884   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:10.008334   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:10.508168   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:11.008274   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:11.508025   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:12.008228   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:12.507713   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:13.008537   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:13.508684   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:14.009919   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:11.532599   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:12.035420   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:12.531992   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:13.031944   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:13.531194   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:14.032224   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:14.531672   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:15.031544   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:15.531967   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:16.031448   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:13.501782   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:14.001444   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:14.503454   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:15.002767   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:15.501906   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:16.001726   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:16.502123   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:17.005942   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:17.501817   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:18.001941   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:14.507853   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:15.008476   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:15.508667   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:16.008722   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:16.509046   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:17.008778   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:17.508906   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:18.008492   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:18.508647   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:19.007815   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:16.532200   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:17.031966   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:17.531791   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:18.033536   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:18.532652   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:19.032201   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:19.531928   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:20.033359   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:20.533670   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:21.032187   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:18.501934   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:19.002902   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:19.501267   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:20.002601   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:20.501489   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:21.002545   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:21.501360   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:22.002042   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:22.503032   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:23.001085   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:19.507611   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:20.008196   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:20.509732   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:21.009055   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:21.508388   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:22.007995   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:22.507537   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:23.008854   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:23.508167   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:24.008019   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:21.531647   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:22.034444   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:22.532628   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:23.032333   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:23.531736   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:24.032056   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:24.531916   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:25.031464   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:25.532198   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:26.032089   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:23.501603   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:24.001216   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:24.502879   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:25.001292   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:25.501341   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:26.002410   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:26.502804   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:27.002021   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:27.502279   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:28.002340   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:24.507566   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:25.008774   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:25.509162   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:26.009209   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:26.507648   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:27.009824   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:27.508631   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:28.009013   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:28.507653   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:29.008464   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:26.531694   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:27.032157   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:27.532431   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:28.031890   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:28.533074   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:29.032602   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:29.531803   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:30.032839   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:30.531064   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:31.033390   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:28.502372   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:29.001862   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:29.502294   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:30.001477   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:30.503184   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:31.002218   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:31.502643   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:32.001877   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:32.503311   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:33.002436   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:29.508296   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:30.008304   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:30.508381   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:31.008490   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:31.508829   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:32.007834   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:32.508400   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:33.008794   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:33.509376   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:34.008146   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:31.531920   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:32.033659   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:32.532892   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:33.031391   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:33.532537   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:34.033029   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:34.530956   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:35.031585   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:35.533148   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:36.031532   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:33.502341   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:34.002087   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:34.501994   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:35.001651   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:35.501441   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:36.002140   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:36.501765   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:37.001241   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:37.502479   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:38.002437   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:34.508235   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:35.008483   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:35.508534   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:36.008744   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:36.508702   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:37.008924   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:37.507985   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:38.007421   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:38.507911   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:39.008590   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:36.532045   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:37.031418   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:37.532867   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:38.031333   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:38.532360   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:39.032704   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:39.531535   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:40.033276   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:40.532090   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:41.032674   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:38.503195   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:39.001544   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:39.501650   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:40.001446   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:40.503141   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:41.001293   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:41.501933   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:42.001485   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:42.501393   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:43.001793   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:39.508830   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:40.008286   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:40.508322   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:41.008679   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:41.509263   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:42.008010   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:42.507661   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:43.008954   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:43.508712   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:44.008648   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:41.531115   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:42.033681   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:42.532204   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:43.031525   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:43.532706   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:44.031154   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:44.531400   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:45.032686   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:45.531016   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:46.031694   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:43.500799   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:44.001437   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:44.503087   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:45.001262   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:45.502070   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:46.001597   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:46.501748   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:47.000952   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:47.503068   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:48.002924   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:44.508721   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:45.009360   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:45.507561   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:46.008024   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:46.509438   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:47.008003   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:47.509182   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:48.007694   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:48.509204   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:49.008075   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:46.531475   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:47.032236   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:47.531623   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:48.032627   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:48.531328   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:49.032263   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:49.531544   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:50.031759   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:50.531968   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:51.031169   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:48.502523   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:49.001089   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:49.502166   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:50.002297   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:50.501900   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:51.002177   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:51.501990   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:52.001761   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:52.503411   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:53.001888   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:49.508661   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:50.008645   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:50.509700   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:51.008238   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:51.509485   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:52.008711   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:52.508528   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:53.009157   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:53.508329   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:54.008371   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:51.532470   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:52.033506   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:52.532332   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:53.032618   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:53.532408   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:54.032700   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:54.532680   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:55.030763   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:55.531486   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:56.032694   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:53.501870   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:54.001255   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:54.502146   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:55.001892   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:55.502373   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:56.001923   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:56.502476   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:57.001982   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:57.502446   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:58.003222   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:54.508346   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:55.008513   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:55.509470   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:56.009002   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:56.508067   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:57.007514   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:57.508798   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:58.008828   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:58.508496   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:59.008238   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:56.531146   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:57.031591   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:57.532375   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:58.033082   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:58.531649   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:59.031902   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:59.532588   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:00.032880   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:00.532136   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:01.028606   55595 kapi.go:81] temporary error: getting Pods with label selector "app.kubernetes.io/name=kubernetes-dashboard-web" : [client rate limiter Wait returned an error: context deadline exceeded]
	I1219 04:00:01.028642   55595 kapi.go:107] duration metric: took 6m0.000598506s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	W1219 04:00:01.028754   55595 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [waiting for app.kubernetes.io/name=kubernetes-dashboard-web pods: context deadline exceeded]
	I1219 04:00:01.030295   55595 out.go:179] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I1219 04:00:01.031288   55595 addons.go:546] duration metric: took 6m6.695311639s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I1219 04:00:01.031318   55595 start.go:247] waiting for cluster config update ...
	I1219 04:00:01.031329   55595 start.go:256] writing updated cluster config ...
	I1219 04:00:01.031596   55595 ssh_runner.go:195] Run: rm -f paused
	I1219 04:00:01.039401   55595 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 04:00:01.043907   55595 pod_ready.go:83] waiting for pod "coredns-7d764666f9-s7729" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:01.050711   55595 pod_ready.go:94] pod "coredns-7d764666f9-s7729" is "Ready"
	I1219 04:00:01.050733   55595 pod_ready.go:86] duration metric: took 6.803187ms for pod "coredns-7d764666f9-s7729" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:01.053765   55595 pod_ready.go:83] waiting for pod "etcd-no-preload-298059" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:01.058213   55595 pod_ready.go:94] pod "etcd-no-preload-298059" is "Ready"
	I1219 04:00:01.058234   55595 pod_ready.go:86] duration metric: took 4.447718ms for pod "etcd-no-preload-298059" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:01.060300   55595 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-298059" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:01.065142   55595 pod_ready.go:94] pod "kube-apiserver-no-preload-298059" is "Ready"
	I1219 04:00:01.065166   55595 pod_ready.go:86] duration metric: took 4.840116ms for pod "kube-apiserver-no-preload-298059" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:01.067284   55595 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-298059" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:01.445171   55595 pod_ready.go:94] pod "kube-controller-manager-no-preload-298059" is "Ready"
	I1219 04:00:01.445200   55595 pod_ready.go:86] duration metric: took 377.900542ms for pod "kube-controller-manager-no-preload-298059" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:01.645417   55595 pod_ready.go:83] waiting for pod "kube-proxy-mdfxl" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:02.044330   55595 pod_ready.go:94] pod "kube-proxy-mdfxl" is "Ready"
	I1219 04:00:02.044377   55595 pod_ready.go:86] duration metric: took 398.907218ms for pod "kube-proxy-mdfxl" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:02.245766   55595 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-298059" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:02.645879   55595 pod_ready.go:94] pod "kube-scheduler-no-preload-298059" is "Ready"
	I1219 04:00:02.645937   55595 pod_ready.go:86] duration metric: took 400.143888ms for pod "kube-scheduler-no-preload-298059" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:02.645954   55595 pod_ready.go:40] duration metric: took 1.606522986s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 04:00:02.697158   55595 start.go:625] kubectl: 1.35.0, cluster: 1.35.0-rc.1 (minor skew: 0)
	I1219 04:00:02.698980   55595 out.go:179] * Done! kubectl is now configured to use "no-preload-298059" cluster and "default" namespace by default
	I1219 03:59:58.502543   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:59.001139   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:59.501649   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:00.001415   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:00.502374   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:01.002272   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:01.502072   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:02.002694   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:02.501377   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:03.002499   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:59.508999   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:00.009465   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:00.508462   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:01.008805   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:01.509068   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:02.007682   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:02.508807   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:03.009533   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:03.509171   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:04.008344   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:03.501482   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:04.002080   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:04.502514   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:05.002257   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:05.502741   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:06.001565   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:06.502968   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:07.002364   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:07.502630   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:08.001239   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:04.508168   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:05.007952   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:05.508714   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:06.008805   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:06.508239   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:07.009278   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:07.509811   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:08.008945   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:08.513267   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:09.008127   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:08.502641   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:09.002630   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:09.501272   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:10.001592   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:10.502177   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:11.002030   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:11.501972   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:12.001917   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:12.502061   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:13.001824   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:09.508106   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:10.007937   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:10.507970   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:11.008418   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:11.508614   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:12.007994   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:12.508452   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:13.008632   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:13.510343   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:14.008029   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:13.501559   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:14.000819   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:14.501592   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:15.002062   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:15.501962   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:16.001720   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:16.502083   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:17.002024   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:17.501681   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:18.001502   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:14.507866   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:15.009254   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:15.508704   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:16.008650   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:16.508846   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:17.010798   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:17.507933   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:18.009073   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:18.508337   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:19.008331   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:18.502462   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:19.003975   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:19.501373   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:20.002075   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:20.502437   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:21.001953   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:21.501417   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:22.003083   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:22.501515   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:23.001553   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:19.509712   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:20.008196   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:20.507361   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:21.008284   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:21.508302   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:22.007728   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:22.509259   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:23.008711   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:23.509664   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:24.008507   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:23.502079   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:24.001986   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:24.501922   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:25.001179   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:25.502972   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:26.002165   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:26.502809   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:27.001369   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:27.502083   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:28.002507   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:24.508264   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:25.008006   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:25.509488   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:26.008519   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:26.508978   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:27.008309   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:27.508775   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:28.009625   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:28.508731   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:29.009043   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:28.502787   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:29.001831   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:29.502430   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:29.998860   55957 kapi.go:81] temporary error: getting Pods with label selector "app.kubernetes.io/name=kubernetes-dashboard-web" : [client rate limiter Wait returned an error: context deadline exceeded]
	I1219 04:00:29.998886   55957 kapi.go:107] duration metric: took 6m0.000824832s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	W1219 04:00:29.998960   55957 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [waiting for app.kubernetes.io/name=kubernetes-dashboard-web pods: context deadline exceeded]
	I1219 04:00:30.000498   55957 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1219 04:00:30.001513   55957 addons.go:546] duration metric: took 6m7.141140342s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1219 04:00:30.001540   55957 start.go:247] waiting for cluster config update ...
	I1219 04:00:30.001550   55957 start.go:256] writing updated cluster config ...
	I1219 04:00:30.001800   55957 ssh_runner.go:195] Run: rm -f paused
	I1219 04:00:30.010656   55957 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 04:00:30.015390   55957 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9ptrv" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:30.020029   55957 pod_ready.go:94] pod "coredns-66bc5c9577-9ptrv" is "Ready"
	I1219 04:00:30.020051   55957 pod_ready.go:86] duration metric: took 4.638733ms for pod "coredns-66bc5c9577-9ptrv" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:30.022246   55957 pod_ready.go:83] waiting for pod "etcd-embed-certs-244717" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:30.026208   55957 pod_ready.go:94] pod "etcd-embed-certs-244717" is "Ready"
	I1219 04:00:30.026224   55957 pod_ready.go:86] duration metric: took 3.954396ms for pod "etcd-embed-certs-244717" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:30.028026   55957 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-244717" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:30.033934   55957 pod_ready.go:94] pod "kube-apiserver-embed-certs-244717" is "Ready"
	I1219 04:00:30.033951   55957 pod_ready.go:86] duration metric: took 5.905842ms for pod "kube-apiserver-embed-certs-244717" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:30.036019   55957 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-244717" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:30.417680   55957 pod_ready.go:94] pod "kube-controller-manager-embed-certs-244717" is "Ready"
	I1219 04:00:30.417709   55957 pod_ready.go:86] duration metric: took 381.673199ms for pod "kube-controller-manager-embed-certs-244717" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:30.616122   55957 pod_ready.go:83] waiting for pod "kube-proxy-p8gvm" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:31.015548   55957 pod_ready.go:94] pod "kube-proxy-p8gvm" is "Ready"
	I1219 04:00:31.015585   55957 pod_ready.go:86] duration metric: took 399.442531ms for pod "kube-proxy-p8gvm" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:31.216107   55957 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-244717" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:31.615784   55957 pod_ready.go:94] pod "kube-scheduler-embed-certs-244717" is "Ready"
	I1219 04:00:31.615816   55957 pod_ready.go:86] duration metric: took 399.682179ms for pod "kube-scheduler-embed-certs-244717" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:31.615832   55957 pod_ready.go:40] duration metric: took 1.605153664s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 04:00:31.662639   55957 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 04:00:31.664208   55957 out.go:179] * Done! kubectl is now configured to use "embed-certs-244717" cluster and "default" namespace by default
	I1219 04:00:29.508455   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:30.007925   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:30.507876   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:31.007766   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:31.509691   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:32.008321   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:32.509128   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:33.008511   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:33.509110   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:34.008834   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:34.508661   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:35.009145   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:35.510268   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:36.007810   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:36.508457   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:37.008511   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:37.508340   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:38.008906   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:38.508226   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:39.007515   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:39.508398   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:40.008048   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:40.507411   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:41.008044   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:41.509491   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:42.008720   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:42.508893   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:43.008890   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:43.507746   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:44.008735   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:44.508515   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:45.008316   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:45.508925   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:46.007410   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:46.507809   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:47.007816   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:47.507934   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:48.008317   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:48.511438   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:49.008355   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:49.508479   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:50.008867   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:50.507492   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:51.008220   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:51.508283   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:52.008800   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:52.508617   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:53.008464   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:53.508878   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:54.008198   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:54.509007   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:55.007919   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:55.507118   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:56.008201   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:56.507989   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:57.007872   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:57.508142   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:58.008008   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:58.504601   56230 kapi.go:81] temporary error: getting Pods with label selector "app.kubernetes.io/name=kubernetes-dashboard-web" : [client rate limiter Wait returned an error: context deadline exceeded]
	I1219 04:00:58.504633   56230 kapi.go:107] duration metric: took 6m0.000289249s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	W1219 04:00:58.504722   56230 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [waiting for app.kubernetes.io/name=kubernetes-dashboard-web pods: context deadline exceeded]
	I1219 04:00:58.506261   56230 out.go:179] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I1219 04:00:58.507432   56230 addons.go:546] duration metric: took 6m6.536744168s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I1219 04:00:58.507471   56230 start.go:247] waiting for cluster config update ...
	I1219 04:00:58.507487   56230 start.go:256] writing updated cluster config ...
	I1219 04:00:58.507818   56230 ssh_runner.go:195] Run: rm -f paused
	I1219 04:00:58.516094   56230 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 04:00:58.521203   56230 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dnfcc" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:58.526011   56230 pod_ready.go:94] pod "coredns-66bc5c9577-dnfcc" is "Ready"
	I1219 04:00:58.526035   56230 pod_ready.go:86] duration metric: took 4.809568ms for pod "coredns-66bc5c9577-dnfcc" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:58.528592   56230 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-168174" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:58.534102   56230 pod_ready.go:94] pod "etcd-default-k8s-diff-port-168174" is "Ready"
	I1219 04:00:58.534119   56230 pod_ready.go:86] duration metric: took 5.507213ms for pod "etcd-default-k8s-diff-port-168174" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:58.536078   56230 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-168174" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:58.540931   56230 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-168174" is "Ready"
	I1219 04:00:58.540951   56230 pod_ready.go:86] duration metric: took 4.854792ms for pod "kube-apiserver-default-k8s-diff-port-168174" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:58.542905   56230 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-168174" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:58.920520   56230 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-168174" is "Ready"
	I1219 04:00:58.920546   56230 pod_ready.go:86] duration metric: took 377.623833ms for pod "kube-controller-manager-default-k8s-diff-port-168174" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:59.120738   56230 pod_ready.go:83] waiting for pod "kube-proxy-zs4wg" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:59.520222   56230 pod_ready.go:94] pod "kube-proxy-zs4wg" is "Ready"
	I1219 04:00:59.520254   56230 pod_ready.go:86] duration metric: took 399.487462ms for pod "kube-proxy-zs4wg" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:59.721383   56230 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-168174" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:01:00.120982   56230 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-168174" is "Ready"
	I1219 04:01:00.121009   56230 pod_ready.go:86] duration metric: took 399.598924ms for pod "kube-scheduler-default-k8s-diff-port-168174" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:01:00.121020   56230 pod_ready.go:40] duration metric: took 1.604899766s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 04:01:00.167943   56230 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 04:01:00.169437   56230 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-168174" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 19 04:10:01 default-k8s-diff-port-168174 crio[893]: time="2025-12-19 04:10:01.525718801Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766117401525697344,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:136931,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cd75d63a-5844-4ad4-9172-12ec01e16871 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 04:10:01 default-k8s-diff-port-168174 crio[893]: time="2025-12-19 04:10:01.526481099Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f53c305c-f6a1-4ea4-8951-004150bdf9d7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:10:01 default-k8s-diff-port-168174 crio[893]: time="2025-12-19 04:10:01.526532706Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f53c305c-f6a1-4ea4-8951-004150bdf9d7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:10:01 default-k8s-diff-port-168174 crio[893]: time="2025-12-19 04:10:01.526718794Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:26e6ee81f646c72c850b70ecca35f7adee442aecb0b43b306d3cdeee6026f584,PodSandboxId:8345a5965eace8c7c7cc6f9373bab32a2a8314386042f84afb02313b2048d9b9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766116522347035725,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec1d1888-a950-48d5-9b73-440e7556818b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2caca388dcae558e2cf45b546d63b95993ed6f874effdb03ad5e5911235916da,PodSandboxId:4f012f54328ca4f5304cbd68bdc8bbdfa10180a294a8b12faebe143bbe3f41ae,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1766116502259145777,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ca7cb580-0d7c-401c-8d64-c5bb86760477,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca1b125b6cafad9798dcaaa4e59d74adde5911caa574d34924fdd4022bbe679a,PodSandboxId:6925fe331fb33829ee17d854417078bf77251255616425c7cf53183387396a23,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1766116498385943301,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-dnfcc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8ee66a7-b129-4499-aad8-a988ecea241c,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"
name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a9a2aa1cbfadbd847de6cbbc990670ed05c34778fbe25ce4972a4762b0e5a14,PodSandboxId:5cce2c4d9c72d6c8364f4acb895c868c282b9883caf7d7d2964785eff615dc54,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1766116491940005380,Labels:map[string]string{io.kuber
netes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zs4wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2212782c-32ab-4355-8dda-9117953b0223,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e7628d157bc011d749b031c83d671751b03a44c981e27b8e1cea9a8de01cb72,PodSandboxId:8345a5965eace8c7c7cc6f9373bab32a2a8314386042f84afb02313b2048d9b9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766116491309986286,Labels:map[string]string{io.kubernetes.container.name: sto
rage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec1d1888-a950-48d5-9b73-440e7556818b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7b7fbe883018669fcd7b491b5ffd6260c3c94de5b5212ff255578f1329f11cc,PodSandboxId:4983aae39c7cb6080ad85fdd5a206a99db267bc8a565800f67829f7fd83d31d1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766116486801027255,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: e
tcd-default-k8s-diff-port-168174,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84e108fdf17cc8f56b9a0db5bf73d6cb,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a59d170b8ca5d86fcf9375544c790fe7c5d51034aff8c9daaf79524ddd0f122,PodSandboxId:1b2fac58f6e514929f7ca9aaa298e393cfd7bc1493f5586e7ad68076fe6a45b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:176611
6486784784284,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-168174,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b8cfd1777a85573c752d6ea56606951,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8444,\"containerPort\":8444,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c47bef8ab5b6819f147cc9370436632c70fad252c4b7755f0240da10cf676d8a,PodSandboxId:1cf243a559ad56b72baae71b02b1497fc03615938731b5ebfe2ac4fe8f830c3e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandle
r:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766116486737925173,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-168174,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55409b5058745059effbb614ceaaeabc,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1efb3e359c44d7879c1ca0d876346921845fcdd8a36efd9bb646db8301f8f5f,PodSandboxId:a1b44c869cec4ea46ff8e37c6fcca45502228a99c6b37d1f8a867f6427e6bb89,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:
5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766116486704823215,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-168174,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d023f6143e50cea3e3be2ba5b8c07a72,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f53c305c-f6a1-4ea4-8951-004150bdf9d7 name=/runtime.v
1.RuntimeService/ListContainers
	Dec 19 04:10:01 default-k8s-diff-port-168174 crio[893]: time="2025-12-19 04:10:01.561526480Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d16bb2f2-2373-4e58-b0cc-9caec6a1eede name=/runtime.v1.RuntimeService/Version
	Dec 19 04:10:01 default-k8s-diff-port-168174 crio[893]: time="2025-12-19 04:10:01.561594218Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d16bb2f2-2373-4e58-b0cc-9caec6a1eede name=/runtime.v1.RuntimeService/Version
	Dec 19 04:10:01 default-k8s-diff-port-168174 crio[893]: time="2025-12-19 04:10:01.563935093Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=665b5420-97a1-44d8-b060-255b820129ef name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 04:10:01 default-k8s-diff-port-168174 crio[893]: time="2025-12-19 04:10:01.564585011Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766117401564561057,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:136931,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=665b5420-97a1-44d8-b060-255b820129ef name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 04:10:01 default-k8s-diff-port-168174 crio[893]: time="2025-12-19 04:10:01.565783347Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1d4a2d4e-cb6f-4b1e-9a09-1d2a0921a3b1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:10:01 default-k8s-diff-port-168174 crio[893]: time="2025-12-19 04:10:01.565847632Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1d4a2d4e-cb6f-4b1e-9a09-1d2a0921a3b1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:10:01 default-k8s-diff-port-168174 crio[893]: time="2025-12-19 04:10:01.566042160Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:26e6ee81f646c72c850b70ecca35f7adee442aecb0b43b306d3cdeee6026f584,PodSandboxId:8345a5965eace8c7c7cc6f9373bab32a2a8314386042f84afb02313b2048d9b9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766116522347035725,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec1d1888-a950-48d5-9b73-440e7556818b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2caca388dcae558e2cf45b546d63b95993ed6f874effdb03ad5e5911235916da,PodSandboxId:4f012f54328ca4f5304cbd68bdc8bbdfa10180a294a8b12faebe143bbe3f41ae,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1766116502259145777,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ca7cb580-0d7c-401c-8d64-c5bb86760477,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca1b125b6cafad9798dcaaa4e59d74adde5911caa574d34924fdd4022bbe679a,PodSandboxId:6925fe331fb33829ee17d854417078bf77251255616425c7cf53183387396a23,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1766116498385943301,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-dnfcc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8ee66a7-b129-4499-aad8-a988ecea241c,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"
name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a9a2aa1cbfadbd847de6cbbc990670ed05c34778fbe25ce4972a4762b0e5a14,PodSandboxId:5cce2c4d9c72d6c8364f4acb895c868c282b9883caf7d7d2964785eff615dc54,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1766116491940005380,Labels:map[string]string{io.kuber
netes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zs4wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2212782c-32ab-4355-8dda-9117953b0223,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e7628d157bc011d749b031c83d671751b03a44c981e27b8e1cea9a8de01cb72,PodSandboxId:8345a5965eace8c7c7cc6f9373bab32a2a8314386042f84afb02313b2048d9b9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766116491309986286,Labels:map[string]string{io.kubernetes.container.name: sto
rage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec1d1888-a950-48d5-9b73-440e7556818b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7b7fbe883018669fcd7b491b5ffd6260c3c94de5b5212ff255578f1329f11cc,PodSandboxId:4983aae39c7cb6080ad85fdd5a206a99db267bc8a565800f67829f7fd83d31d1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766116486801027255,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: e
tcd-default-k8s-diff-port-168174,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84e108fdf17cc8f56b9a0db5bf73d6cb,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a59d170b8ca5d86fcf9375544c790fe7c5d51034aff8c9daaf79524ddd0f122,PodSandboxId:1b2fac58f6e514929f7ca9aaa298e393cfd7bc1493f5586e7ad68076fe6a45b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:176611
6486784784284,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-168174,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b8cfd1777a85573c752d6ea56606951,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8444,\"containerPort\":8444,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c47bef8ab5b6819f147cc9370436632c70fad252c4b7755f0240da10cf676d8a,PodSandboxId:1cf243a559ad56b72baae71b02b1497fc03615938731b5ebfe2ac4fe8f830c3e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandle
r:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766116486737925173,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-168174,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55409b5058745059effbb614ceaaeabc,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1efb3e359c44d7879c1ca0d876346921845fcdd8a36efd9bb646db8301f8f5f,PodSandboxId:a1b44c869cec4ea46ff8e37c6fcca45502228a99c6b37d1f8a867f6427e6bb89,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:
5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766116486704823215,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-168174,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d023f6143e50cea3e3be2ba5b8c07a72,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1d4a2d4e-cb6f-4b1e-9a09-1d2a0921a3b1 name=/runtime.v
1.RuntimeService/ListContainers
	Dec 19 04:10:01 default-k8s-diff-port-168174 crio[893]: time="2025-12-19 04:10:01.595226730Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9cc2b790-f3bf-446b-8e59-d4d74552fcf7 name=/runtime.v1.RuntimeService/Version
	Dec 19 04:10:01 default-k8s-diff-port-168174 crio[893]: time="2025-12-19 04:10:01.595301504Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9cc2b790-f3bf-446b-8e59-d4d74552fcf7 name=/runtime.v1.RuntimeService/Version
	Dec 19 04:10:01 default-k8s-diff-port-168174 crio[893]: time="2025-12-19 04:10:01.596619257Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3d73e25a-7b01-4aab-9d13-d7a15620902e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 04:10:01 default-k8s-diff-port-168174 crio[893]: time="2025-12-19 04:10:01.597274915Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766117401597253272,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:136931,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3d73e25a-7b01-4aab-9d13-d7a15620902e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 04:10:01 default-k8s-diff-port-168174 crio[893]: time="2025-12-19 04:10:01.598360230Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=21c41bcf-8f7c-43e6-9772-72ccf8649a1e name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:10:01 default-k8s-diff-port-168174 crio[893]: time="2025-12-19 04:10:01.598671384Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=21c41bcf-8f7c-43e6-9772-72ccf8649a1e name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:10:01 default-k8s-diff-port-168174 crio[893]: time="2025-12-19 04:10:01.598915768Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:26e6ee81f646c72c850b70ecca35f7adee442aecb0b43b306d3cdeee6026f584,PodSandboxId:8345a5965eace8c7c7cc6f9373bab32a2a8314386042f84afb02313b2048d9b9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766116522347035725,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec1d1888-a950-48d5-9b73-440e7556818b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2caca388dcae558e2cf45b546d63b95993ed6f874effdb03ad5e5911235916da,PodSandboxId:4f012f54328ca4f5304cbd68bdc8bbdfa10180a294a8b12faebe143bbe3f41ae,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1766116502259145777,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ca7cb580-0d7c-401c-8d64-c5bb86760477,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca1b125b6cafad9798dcaaa4e59d74adde5911caa574d34924fdd4022bbe679a,PodSandboxId:6925fe331fb33829ee17d854417078bf77251255616425c7cf53183387396a23,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1766116498385943301,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-dnfcc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8ee66a7-b129-4499-aad8-a988ecea241c,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"
name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a9a2aa1cbfadbd847de6cbbc990670ed05c34778fbe25ce4972a4762b0e5a14,PodSandboxId:5cce2c4d9c72d6c8364f4acb895c868c282b9883caf7d7d2964785eff615dc54,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1766116491940005380,Labels:map[string]string{io.kuber
netes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zs4wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2212782c-32ab-4355-8dda-9117953b0223,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e7628d157bc011d749b031c83d671751b03a44c981e27b8e1cea9a8de01cb72,PodSandboxId:8345a5965eace8c7c7cc6f9373bab32a2a8314386042f84afb02313b2048d9b9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766116491309986286,Labels:map[string]string{io.kubernetes.container.name: sto
rage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec1d1888-a950-48d5-9b73-440e7556818b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7b7fbe883018669fcd7b491b5ffd6260c3c94de5b5212ff255578f1329f11cc,PodSandboxId:4983aae39c7cb6080ad85fdd5a206a99db267bc8a565800f67829f7fd83d31d1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766116486801027255,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: e
tcd-default-k8s-diff-port-168174,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84e108fdf17cc8f56b9a0db5bf73d6cb,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a59d170b8ca5d86fcf9375544c790fe7c5d51034aff8c9daaf79524ddd0f122,PodSandboxId:1b2fac58f6e514929f7ca9aaa298e393cfd7bc1493f5586e7ad68076fe6a45b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:176611
6486784784284,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-168174,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b8cfd1777a85573c752d6ea56606951,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8444,\"containerPort\":8444,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c47bef8ab5b6819f147cc9370436632c70fad252c4b7755f0240da10cf676d8a,PodSandboxId:1cf243a559ad56b72baae71b02b1497fc03615938731b5ebfe2ac4fe8f830c3e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandle
r:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766116486737925173,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-168174,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55409b5058745059effbb614ceaaeabc,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1efb3e359c44d7879c1ca0d876346921845fcdd8a36efd9bb646db8301f8f5f,PodSandboxId:a1b44c869cec4ea46ff8e37c6fcca45502228a99c6b37d1f8a867f6427e6bb89,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:
5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766116486704823215,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-168174,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d023f6143e50cea3e3be2ba5b8c07a72,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=21c41bcf-8f7c-43e6-9772-72ccf8649a1e name=/runtime.v
1.RuntimeService/ListContainers
	Dec 19 04:10:01 default-k8s-diff-port-168174 crio[893]: time="2025-12-19 04:10:01.630763894Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=59e97531-b508-41eb-893c-00531c7653b7 name=/runtime.v1.RuntimeService/Version
	Dec 19 04:10:01 default-k8s-diff-port-168174 crio[893]: time="2025-12-19 04:10:01.630852872Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=59e97531-b508-41eb-893c-00531c7653b7 name=/runtime.v1.RuntimeService/Version
	Dec 19 04:10:01 default-k8s-diff-port-168174 crio[893]: time="2025-12-19 04:10:01.632098869Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=329f934e-9437-48fc-945e-7335af4d3473 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 04:10:01 default-k8s-diff-port-168174 crio[893]: time="2025-12-19 04:10:01.632527558Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766117401632505407,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:136931,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=329f934e-9437-48fc-945e-7335af4d3473 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 04:10:01 default-k8s-diff-port-168174 crio[893]: time="2025-12-19 04:10:01.633346230Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=31d1a468-959d-4f1c-afb6-e484ef3f5e65 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:10:01 default-k8s-diff-port-168174 crio[893]: time="2025-12-19 04:10:01.633551240Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=31d1a468-959d-4f1c-afb6-e484ef3f5e65 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:10:01 default-k8s-diff-port-168174 crio[893]: time="2025-12-19 04:10:01.633761281Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:26e6ee81f646c72c850b70ecca35f7adee442aecb0b43b306d3cdeee6026f584,PodSandboxId:8345a5965eace8c7c7cc6f9373bab32a2a8314386042f84afb02313b2048d9b9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766116522347035725,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec1d1888-a950-48d5-9b73-440e7556818b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2caca388dcae558e2cf45b546d63b95993ed6f874effdb03ad5e5911235916da,PodSandboxId:4f012f54328ca4f5304cbd68bdc8bbdfa10180a294a8b12faebe143bbe3f41ae,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1766116502259145777,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ca7cb580-0d7c-401c-8d64-c5bb86760477,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca1b125b6cafad9798dcaaa4e59d74adde5911caa574d34924fdd4022bbe679a,PodSandboxId:6925fe331fb33829ee17d854417078bf77251255616425c7cf53183387396a23,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1766116498385943301,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-dnfcc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8ee66a7-b129-4499-aad8-a988ecea241c,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"
name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a9a2aa1cbfadbd847de6cbbc990670ed05c34778fbe25ce4972a4762b0e5a14,PodSandboxId:5cce2c4d9c72d6c8364f4acb895c868c282b9883caf7d7d2964785eff615dc54,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1766116491940005380,Labels:map[string]string{io.kuber
netes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zs4wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2212782c-32ab-4355-8dda-9117953b0223,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e7628d157bc011d749b031c83d671751b03a44c981e27b8e1cea9a8de01cb72,PodSandboxId:8345a5965eace8c7c7cc6f9373bab32a2a8314386042f84afb02313b2048d9b9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766116491309986286,Labels:map[string]string{io.kubernetes.container.name: sto
rage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec1d1888-a950-48d5-9b73-440e7556818b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7b7fbe883018669fcd7b491b5ffd6260c3c94de5b5212ff255578f1329f11cc,PodSandboxId:4983aae39c7cb6080ad85fdd5a206a99db267bc8a565800f67829f7fd83d31d1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766116486801027255,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: e
tcd-default-k8s-diff-port-168174,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84e108fdf17cc8f56b9a0db5bf73d6cb,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a59d170b8ca5d86fcf9375544c790fe7c5d51034aff8c9daaf79524ddd0f122,PodSandboxId:1b2fac58f6e514929f7ca9aaa298e393cfd7bc1493f5586e7ad68076fe6a45b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:176611
6486784784284,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-168174,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b8cfd1777a85573c752d6ea56606951,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8444,\"containerPort\":8444,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c47bef8ab5b6819f147cc9370436632c70fad252c4b7755f0240da10cf676d8a,PodSandboxId:1cf243a559ad56b72baae71b02b1497fc03615938731b5ebfe2ac4fe8f830c3e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandle
r:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766116486737925173,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-168174,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55409b5058745059effbb614ceaaeabc,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1efb3e359c44d7879c1ca0d876346921845fcdd8a36efd9bb646db8301f8f5f,PodSandboxId:a1b44c869cec4ea46ff8e37c6fcca45502228a99c6b37d1f8a867f6427e6bb89,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:
5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766116486704823215,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-168174,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d023f6143e50cea3e3be2ba5b8c07a72,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=31d1a468-959d-4f1c-afb6-e484ef3f5e65 name=/runtime.v
1.RuntimeService/ListContainers
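
	The crio debug entries above are the kubelet's periodic CRI polling loop: Version, ImageFsInfo, and ListContainers with an empty filter ("No filters were applied, returning full container list"), repeated every few hundred milliseconds. Below is a minimal Go sketch of issuing that same ListContainers RPC directly against CRI-O; the socket path unix:///var/run/crio/crio.sock is an assumption about this ISO, and the snippet is illustrative, not part of the test suite.

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Assumed CRI-O socket path; adjust if the runtime listens elsewhere.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Same RPC the kubelet issues; an empty filter returns the full
		// container list, matching the debug lines captured above.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
			Filter: &runtimeapi.ContainerFilter{},
		})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s  %s  %s\n", c.Id[:13], c.Metadata.Name, c.State)
		}
	}

	Running crictl on the node drives the same RuntimeService RPCs, which is why its output in the next block lines up with these log entries.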
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	26e6ee81f646c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      14 minutes ago      Running             storage-provisioner       3                   8345a5965eace       storage-provisioner                                    kube-system
	2caca388dcae5       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   14 minutes ago      Running             busybox                   1                   4f012f54328ca       busybox                                                default
	ca1b125b6cafa       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      15 minutes ago      Running             coredns                   1                   6925fe331fb33       coredns-66bc5c9577-dnfcc                               kube-system
	1a9a2aa1cbfad       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                      15 minutes ago      Running             kube-proxy                1                   5cce2c4d9c72d       kube-proxy-zs4wg                                       kube-system
	5e7628d157bc0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      15 minutes ago      Exited              storage-provisioner       2                   8345a5965eace       storage-provisioner                                    kube-system
	a7b7fbe883018       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      15 minutes ago      Running             etcd                      1                   4983aae39c7cb       etcd-default-k8s-diff-port-168174                      kube-system
	5a59d170b8ca5       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                      15 minutes ago      Running             kube-apiserver            1                   1b2fac58f6e51       kube-apiserver-default-k8s-diff-port-168174            kube-system
	c47bef8ab5b68       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                      15 minutes ago      Running             kube-scheduler            1                   1cf243a559ad5       kube-scheduler-default-k8s-diff-port-168174            kube-system
	f1efb3e359c44       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                      15 minutes ago      Running             kube-controller-manager   1                   a1b44c869cec4       kube-controller-manager-default-k8s-diff-port-168174   kube-system
	
	
	==> coredns [ca1b125b6cafad9798dcaaa4e59d74adde5911caa574d34924fdd4022bbe679a] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57723 - 29016 "HINFO IN 3001237849225172108.7414532178602150098. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.02497374s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-168174
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-168174
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=default-k8s-diff-port-168174
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T03_51_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 03:51:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-168174
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 04:09:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 04:05:01 +0000   Fri, 19 Dec 2025 03:51:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 04:05:01 +0000   Fri, 19 Dec 2025 03:51:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 04:05:01 +0000   Fri, 19 Dec 2025 03:51:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 04:05:01 +0000   Fri, 19 Dec 2025 03:54:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.68
	  Hostname:    default-k8s-diff-port-168174
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 5503b0a81398475db625563c5bc2d168
	  System UUID:                5503b0a8-1398-475d-b625-563c5bc2d168
	  Boot ID:                    ec7dc5a0-c588-4c8b-b9bc-28aeb7330fb9
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 coredns-66bc5c9577-dnfcc                                 100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     17m
	  kube-system                 etcd-default-k8s-diff-port-168174                        100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         18m
	  kube-system                 kube-apiserver-default-k8s-diff-port-168174              250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-168174     200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-zs4wg                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-default-k8s-diff-port-168174              100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 metrics-server-746fcd58dc-xjkbx                          100m (5%)     0 (0%)      200Mi (6%)       0 (0%)         17m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kubernetes-dashboard        kubernetes-dashboard-api-7ddd685bb4-kxd2m                100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    15m
	  kubernetes-dashboard        kubernetes-dashboard-auth-548df69c79-p9fml               100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    15m
	  kubernetes-dashboard        kubernetes-dashboard-kong-9849c64bd-rjxnf                0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kubernetes-dashboard        kubernetes-dashboard-metrics-scraper-7685fd8b77-6s5wh    100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    15m
	  kubernetes-dashboard        kubernetes-dashboard-web-5c9f966b98-68g4g                100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests      Limits
	  --------           --------      ------
	  cpu                1250m (62%)   1 (50%)
	  memory             1170Mi (39%)  1770Mi (59%)
	  ephemeral-storage  0 (0%)        0 (0%)
	  hugepages-2Mi      0 (0%)        0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 17m                kube-proxy       
	  Normal   Starting                 15m                kube-proxy       
	  Normal   NodeHasSufficientMemory  18m                kubelet          Node default-k8s-diff-port-168174 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    18m                kubelet          Node default-k8s-diff-port-168174 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18m                kubelet          Node default-k8s-diff-port-168174 status is now: NodeHasSufficientPID
	  Normal   Starting                 18m                kubelet          Starting kubelet.
	  Normal   NodeReady                18m                kubelet          Node default-k8s-diff-port-168174 status is now: NodeReady
	  Normal   RegisteredNode           18m                node-controller  Node default-k8s-diff-port-168174 event: Registered Node default-k8s-diff-port-168174 in Controller
	  Normal   Starting                 15m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node default-k8s-diff-port-168174 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node default-k8s-diff-port-168174 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node default-k8s-diff-port-168174 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 15m                kubelet          Node default-k8s-diff-port-168174 has been rebooted, boot id: ec7dc5a0-c588-4c8b-b9bc-28aeb7330fb9
	  Normal   RegisteredNode           15m                node-controller  Node default-k8s-diff-port-168174 event: Registered Node default-k8s-diff-port-168174 in Controller
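
	The describe-nodes block above is a rendering of the Node object's status and events. A minimal client-go sketch, assuming the default kubeconfig written by minikube and the node name shown above, that reads the same conditions and allocatable CPU programmatically:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumes ~/.kube/config holds the minikube profile's context.
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		node, err := clientset.CoreV1().Nodes().Get(context.Background(),
			"default-k8s-diff-port-168174", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Same data kubectl describe renders as the Conditions table above.
		for _, c := range node.Status.Conditions {
			fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
		}
		fmt.Println("allocatable cpu:", node.Status.Allocatable.Cpu().String())
	}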
	
	
	==> dmesg <==
	[Dec19 03:54] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001295] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000203] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.775225] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000014] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.088691] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.100726] kauditd_printk_skb: 102 callbacks suppressed
	[  +6.380364] kauditd_printk_skb: 168 callbacks suppressed
	[  +0.000108] kauditd_printk_skb: 182 callbacks suppressed
	[Dec19 03:55] kauditd_printk_skb: 291 callbacks suppressed
	[ +12.029718] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [a7b7fbe883018669fcd7b491b5ffd6260c3c94de5b5212ff255578f1329f11cc] <==
	{"level":"warn","ts":"2025-12-19T03:54:49.016067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:54:49.030069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:54:49.054470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:54:49.072343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:54:49.093762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:54:49.148269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:55:23.499220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:55:23.513165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:55:23.534618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:55:23.546175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:55:23.563928Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:55:23.574673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:55:23.591079Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:55:23.604034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:55:23.617917Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:55:23.630551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:55:23.647724Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:55:23.659369Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35306","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-19T04:04:48.130463Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1035}
	{"level":"info","ts":"2025-12-19T04:04:48.155264Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1035,"took":"24.333192ms","hash":1277573185,"current-db-size-bytes":4239360,"current-db-size":"4.2 MB","current-db-size-in-use-bytes":1806336,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-12-19T04:04:48.155315Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1277573185,"revision":1035,"compact-revision":-1}
	{"level":"info","ts":"2025-12-19T04:09:48.136857Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1380}
	{"level":"info","ts":"2025-12-19T04:09:48.141957Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1380,"took":"4.271902ms","hash":14277058,"current-db-size-bytes":4239360,"current-db-size":"4.2 MB","current-db-size-in-use-bytes":2764800,"current-db-size-in-use":"2.8 MB"}
	{"level":"info","ts":"2025-12-19T04:09:48.141986Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":14277058,"revision":1380,"compact-revision":1035}
	{"level":"info","ts":"2025-12-19T04:09:54.218147Z","caller":"traceutil/trace.go:172","msg":"trace[1078571965] transaction","detail":"{read_only:false; response_revision:1801; number_of_response:1; }","duration":"129.558788ms","start":"2025-12-19T04:09:54.088540Z","end":"2025-12-19T04:09:54.218099Z","steps":["trace[1078571965] 'process raft request'  (duration: 129.368425ms)"],"step_count":1}
	
	
	==> kernel <==
	 04:10:01 up 15 min,  0 users,  load average: 0.08, 0.14, 0.15
	Linux default-k8s-diff-port-168174 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Dec 17 12:49:57 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [5a59d170b8ca5d86fcf9375544c790fe7c5d51034aff8c9daaf79524ddd0f122] <==
	E1219 04:05:51.068377       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1219 04:05:51.068387       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 04:07:51.066674       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 04:07:51.066782       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1219 04:07:51.066801       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 04:07:51.069174       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 04:07:51.069213       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1219 04:07:51.069230       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 04:09:50.071909       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 04:09:50.073066       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1219 04:09:51.073387       1 handler_proxy.go:99] no RequestInfo found in the context
	W1219 04:09:51.073498       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 04:09:51.073757       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1219 04:09:51.073830       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1219 04:09:51.073654       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1219 04:09:51.075033       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [f1efb3e359c44d7879c1ca0d876346921845fcdd8a36efd9bb646db8301f8f5f] <==
	I1219 04:03:54.871588       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:04:24.832604       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:04:24.880868       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:04:54.837841       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:04:54.890729       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:05:24.843834       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:05:24.903197       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:05:54.849440       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:05:54.913365       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:06:24.854616       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:06:24.923915       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:06:54.860301       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:06:54.931933       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:07:24.867596       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:07:24.941375       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:07:54.874302       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:07:54.957205       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:08:24.880187       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:08:24.966080       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:08:54.885520       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:08:54.975968       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:09:24.890792       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:09:24.985686       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:09:54.896713       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:09:54.994943       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [1a9a2aa1cbfadbd847de6cbbc990670ed05c34778fbe25ce4972a4762b0e5a14] <==
	I1219 03:54:52.411745       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1219 03:54:52.513125       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1219 03:54:52.513181       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.68"]
	E1219 03:54:52.513246       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 03:54:52.577645       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1219 03:54:52.577754       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1219 03:54:52.577785       1 server_linux.go:132] "Using iptables Proxier"
	I1219 03:54:52.614026       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 03:54:52.614219       1 server.go:527] "Version info" version="v1.34.3"
	I1219 03:54:52.614230       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:54:52.622559       1 config.go:200] "Starting service config controller"
	I1219 03:54:52.623023       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 03:54:52.623060       1 config.go:106] "Starting endpoint slice config controller"
	I1219 03:54:52.623068       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 03:54:52.623084       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 03:54:52.623089       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 03:54:52.629960       1 config.go:309] "Starting node config controller"
	I1219 03:54:52.629977       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 03:54:52.629985       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 03:54:52.724081       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1219 03:54:52.724118       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 03:54:52.724180       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [c47bef8ab5b6819f147cc9370436632c70fad252c4b7755f0240da10cf676d8a] <==
	I1219 03:54:50.124356       1 serving.go:386] Generated self-signed cert in-memory
	I1219 03:54:50.387620       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1219 03:54:50.387644       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:54:50.393918       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1219 03:54:50.394016       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1219 03:54:50.394029       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1219 03:54:50.394045       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1219 03:54:50.401093       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:54:50.401132       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:54:50.401238       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1219 03:54:50.401261       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1219 03:54:50.494477       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1219 03:54:50.501889       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1219 03:54:50.501986       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 19 04:09:29 default-k8s-diff-port-168174 kubelet[1242]: E1219 04:09:29.326661    1242 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest 3.9 in docker.io/library/kong: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kong:3.9"
	Dec 19 04:09:29 default-k8s-diff-port-168174 kubelet[1242]: E1219 04:09:29.326901    1242 kuberuntime_manager.go:1449] "Unhandled Error" err="init container clear-stale-pid start failed in pod kubernetes-dashboard-kong-9849c64bd-rjxnf_kubernetes-dashboard(e2a6d304-c063-4022-9046-9ad88d13e776): ErrImagePull: reading manifest 3.9 in docker.io/library/kong: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 19 04:09:29 default-k8s-diff-port-168174 kubelet[1242]: E1219 04:09:29.326953    1242 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"clear-stale-pid\" with ErrImagePull: \"reading manifest 3.9 in docker.io/library/kong: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-rjxnf" podUID="e2a6d304-c063-4022-9046-9ad88d13e776"
	Dec 19 04:09:29 default-k8s-diff-port-168174 kubelet[1242]: E1219 04:09:29.336844    1242 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 19 04:09:29 default-k8s-diff-port-168174 kubelet[1242]: E1219 04:09:29.337169    1242 kuberuntime_image.go:43] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 19 04:09:29 default-k8s-diff-port-168174 kubelet[1242]: E1219 04:09:29.337334    1242 kuberuntime_manager.go:1449] "Unhandled Error" err="container metrics-server start failed in pod metrics-server-746fcd58dc-xjkbx_kube-system(c6e2f2b2-7b94-4ff2-85ba-e79d72b30655): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Dec 19 04:09:29 default-k8s-diff-port-168174 kubelet[1242]: E1219 04:09:29.337370    1242 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-xjkbx" podUID="c6e2f2b2-7b94-4ff2-85ba-e79d72b30655"
	Dec 19 04:09:30 default-k8s-diff-port-168174 kubelet[1242]: E1219 04:09:30.074264    1242 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-auth\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-auth:1.4.0\\\": ErrImagePull: reading manifest 1.4.0 in docker.io/kubernetesui/dashboard-auth: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-auth-548df69c79-p9fml" podUID="c0e6acd2-48c2-4841-b6f3-227a34007c9a"
	Dec 19 04:09:34 default-k8s-diff-port-168174 kubelet[1242]: E1219 04:09:34.075954    1242 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-web\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": ErrImagePull: reading manifest 1.7.0 in docker.io/kubernetesui/dashboard-web: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-68g4g" podUID="7c826d2d-f354-48c5-b794-0bcd08b8d69d"
	Dec 19 04:09:35 default-k8s-diff-port-168174 kubelet[1242]: E1219 04:09:35.274337    1242 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766117375273964703  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:136931}  inodes_used:{value:62}}"
	Dec 19 04:09:35 default-k8s-diff-port-168174 kubelet[1242]: E1219 04:09:35.274385    1242 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766117375273964703  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:136931}  inodes_used:{value:62}}"
	Dec 19 04:09:40 default-k8s-diff-port-168174 kubelet[1242]: E1219 04:09:40.074666    1242 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-api\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:2bd14c0ffee99d15fb1595644ebd1083ac32c5157c6e6fd8615b0f556a1390c2 in docker.io/kubernetesui/dashboard-api: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-api-7ddd685bb4-kxd2m" podUID="6755fe02-aa19-47c7-84ac-fdbc589e9298"
	Dec 19 04:09:40 default-k8s-diff-port-168174 kubelet[1242]: E1219 04:09:40.075058    1242 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-xjkbx" podUID="c6e2f2b2-7b94-4ff2-85ba-e79d72b30655"
	Dec 19 04:09:43 default-k8s-diff-port-168174 kubelet[1242]: E1219 04:09:43.073684    1242 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\": ErrImagePull: reading manifest 1.2.2 in docker.io/kubernetesui/dashboard-metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-6s5wh" podUID="849bb739-c9a3-414f-8717-a34dddeafbbd"
	Dec 19 04:09:44 default-k8s-diff-port-168174 kubelet[1242]: E1219 04:09:44.076176    1242 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"clear-stale-pid\" with ImagePullBackOff: \"Back-off pulling image \\\"kong:3.9\\\": ErrImagePull: reading manifest 3.9 in docker.io/library/kong: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-rjxnf" podUID="e2a6d304-c063-4022-9046-9ad88d13e776"
	Dec 19 04:09:45 default-k8s-diff-port-168174 kubelet[1242]: E1219 04:09:45.276908    1242 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766117385276222562  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:136931}  inodes_used:{value:62}}"
	Dec 19 04:09:45 default-k8s-diff-port-168174 kubelet[1242]: E1219 04:09:45.276936    1242 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766117385276222562  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:136931}  inodes_used:{value:62}}"
	Dec 19 04:09:46 default-k8s-diff-port-168174 kubelet[1242]: E1219 04:09:46.074619    1242 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-web\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": ErrImagePull: reading manifest 1.7.0 in docker.io/kubernetesui/dashboard-web: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-68g4g" podUID="7c826d2d-f354-48c5-b794-0bcd08b8d69d"
	Dec 19 04:09:54 default-k8s-diff-port-168174 kubelet[1242]: E1219 04:09:54.076476    1242 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-xjkbx" podUID="c6e2f2b2-7b94-4ff2-85ba-e79d72b30655"
	Dec 19 04:09:54 default-k8s-diff-port-168174 kubelet[1242]: E1219 04:09:54.076696    1242 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-api\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:2bd14c0ffee99d15fb1595644ebd1083ac32c5157c6e6fd8615b0f556a1390c2 in docker.io/kubernetesui/dashboard-api: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-api-7ddd685bb4-kxd2m" podUID="6755fe02-aa19-47c7-84ac-fdbc589e9298"
	Dec 19 04:09:55 default-k8s-diff-port-168174 kubelet[1242]: E1219 04:09:55.074325    1242 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"clear-stale-pid\" with ImagePullBackOff: \"Back-off pulling image \\\"kong:3.9\\\": ErrImagePull: reading manifest 3.9 in docker.io/library/kong: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-rjxnf" podUID="e2a6d304-c063-4022-9046-9ad88d13e776"
	Dec 19 04:09:55 default-k8s-diff-port-168174 kubelet[1242]: E1219 04:09:55.278835    1242 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766117395278334399  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:136931}  inodes_used:{value:62}}"
	Dec 19 04:09:55 default-k8s-diff-port-168174 kubelet[1242]: E1219 04:09:55.278870    1242 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766117395278334399  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:136931}  inodes_used:{value:62}}"
	Dec 19 04:09:58 default-k8s-diff-port-168174 kubelet[1242]: E1219 04:09:58.074046    1242 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\": ErrImagePull: reading manifest 1.2.2 in docker.io/kubernetesui/dashboard-metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-6s5wh" podUID="849bb739-c9a3-414f-8717-a34dddeafbbd"
	Dec 19 04:09:59 default-k8s-diff-port-168174 kubelet[1242]: E1219 04:09:59.081527    1242 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-web\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": ErrImagePull: reading manifest 1.7.0 in docker.io/kubernetesui/dashboard-web: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-68g4g" podUID="7c826d2d-f354-48c5-b794-0bcd08b8d69d"
	
	
	==> storage-provisioner [26e6ee81f646c72c850b70ecca35f7adee442aecb0b43b306d3cdeee6026f584] <==
	W1219 04:09:36.190931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:09:38.195303       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:09:38.200921       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:09:40.204658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:09:40.211356       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:09:42.214927       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:09:42.220650       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:09:44.224300       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:09:44.230894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:09:46.234579       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:09:46.241720       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:09:48.245591       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:09:48.250649       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:09:50.254728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:09:50.261535       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:09:52.264335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:09:52.270207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:09:54.273715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:09:54.278838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:09:56.281769       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:09:56.286105       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:09:58.289480       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:09:58.293903       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:10:00.297304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:10:00.306734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [5e7628d157bc011d749b031c83d671751b03a44c981e27b8e1cea9a8de01cb72] <==
	I1219 03:54:51.390847       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1219 03:55:21.394375       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-168174 -n default-k8s-diff-port-168174
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-168174 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: metrics-server-746fcd58dc-xjkbx kubernetes-dashboard-api-7ddd685bb4-kxd2m kubernetes-dashboard-auth-548df69c79-p9fml kubernetes-dashboard-kong-9849c64bd-rjxnf kubernetes-dashboard-metrics-scraper-7685fd8b77-6s5wh kubernetes-dashboard-web-5c9f966b98-68g4g
helpers_test.go:283: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context default-k8s-diff-port-168174 describe pod metrics-server-746fcd58dc-xjkbx kubernetes-dashboard-api-7ddd685bb4-kxd2m kubernetes-dashboard-auth-548df69c79-p9fml kubernetes-dashboard-kong-9849c64bd-rjxnf kubernetes-dashboard-metrics-scraper-7685fd8b77-6s5wh kubernetes-dashboard-web-5c9f966b98-68g4g
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-168174 describe pod metrics-server-746fcd58dc-xjkbx kubernetes-dashboard-api-7ddd685bb4-kxd2m kubernetes-dashboard-auth-548df69c79-p9fml kubernetes-dashboard-kong-9849c64bd-rjxnf kubernetes-dashboard-metrics-scraper-7685fd8b77-6s5wh kubernetes-dashboard-web-5c9f966b98-68g4g: exit status 1 (66.191499ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-xjkbx" not found
	Error from server (NotFound): pods "kubernetes-dashboard-api-7ddd685bb4-kxd2m" not found
	Error from server (NotFound): pods "kubernetes-dashboard-auth-548df69c79-p9fml" not found
	Error from server (NotFound): pods "kubernetes-dashboard-kong-9849c64bd-rjxnf" not found
	Error from server (NotFound): pods "kubernetes-dashboard-metrics-scraper-7685fd8b77-6s5wh" not found
	Error from server (NotFound): pods "kubernetes-dashboard-web-5c9f966b98-68g4g" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context default-k8s-diff-port-168174 describe pod metrics-server-746fcd58dc-xjkbx kubernetes-dashboard-api-7ddd685bb4-kxd2m kubernetes-dashboard-auth-548df69c79-p9fml kubernetes-dashboard-kong-9849c64bd-rjxnf kubernetes-dashboard-metrics-scraper-7685fd8b77-6s5wh kubernetes-dashboard-web-5c9f966b98-68g4g: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.53s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (542.59s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1219 04:03:33.872513    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/calico-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 04:04:18.512398    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-936345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 04:04:23.144028    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/enable-default-cni-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 04:04:24.605906    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/custom-flannel-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 04:05:24.934433    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/flannel-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 04:05:45.208487    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-199791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 04:05:51.406867    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/bridge-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 04:06:53.474172    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/auto-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 04:07:18.385793    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/kindnet-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 04:07:49.625638    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 04:08:16.520255    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/auto-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 04:08:33.872472    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/calico-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 04:08:41.430088    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/kindnet-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-094166 -n old-k8s-version-094166
start_stop_delete_test.go:285: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-12-19 04:11:58.585280044 +0000 UTC m=+6421.623459356
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-094166 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context old-k8s-version-094166 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (61.200299ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "dashboard-metrics-scraper" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-094166 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-094166 -n old-k8s-version-094166
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-094166 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-094166 logs -n 25: (1.429738253s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                    ARGS                                                                                                                     │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-542624 sudo cat /etc/containerd/config.toml                                                                                                                                                                                       │ bridge-542624                │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:51 UTC │
	│ ssh     │ -p bridge-542624 sudo containerd config dump                                                                                                                                                                                                │ bridge-542624                │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:51 UTC │
	│ ssh     │ -p bridge-542624 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                         │ bridge-542624                │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:51 UTC │
	│ ssh     │ -p bridge-542624 sudo systemctl cat crio --no-pager                                                                                                                                                                                         │ bridge-542624                │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:51 UTC │
	│ ssh     │ -p bridge-542624 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                               │ bridge-542624                │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:51 UTC │
	│ ssh     │ -p bridge-542624 sudo crio config                                                                                                                                                                                                           │ bridge-542624                │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:51 UTC │
	│ delete  │ -p bridge-542624                                                                                                                                                                                                                            │ bridge-542624                │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:51 UTC │
	│ delete  │ -p disable-driver-mounts-189846                                                                                                                                                                                                             │ disable-driver-mounts-189846 │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:51 UTC │
	│ start   │ -p default-k8s-diff-port-168174 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-168174 │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:52 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-094166 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                │ old-k8s-version-094166       │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:51 UTC │
	│ stop    │ -p old-k8s-version-094166 --alsologtostderr -v=3                                                                                                                                                                                            │ old-k8s-version-094166       │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:53 UTC │
	│ addons  │ enable metrics-server -p no-preload-298059 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                     │ no-preload-298059            │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:51 UTC │
	│ stop    │ -p no-preload-298059 --alsologtostderr -v=3                                                                                                                                                                                                 │ no-preload-298059            │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:53 UTC │
	│ addons  │ enable metrics-server -p embed-certs-244717 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                    │ embed-certs-244717           │ jenkins │ v1.37.0 │ 19 Dec 25 03:52 UTC │ 19 Dec 25 03:52 UTC │
	│ stop    │ -p embed-certs-244717 --alsologtostderr -v=3                                                                                                                                                                                                │ embed-certs-244717           │ jenkins │ v1.37.0 │ 19 Dec 25 03:52 UTC │ 19 Dec 25 03:53 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-168174 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                          │ default-k8s-diff-port-168174 │ jenkins │ v1.37.0 │ 19 Dec 25 03:52 UTC │ 19 Dec 25 03:52 UTC │
	│ stop    │ -p default-k8s-diff-port-168174 --alsologtostderr -v=3                                                                                                                                                                                      │ default-k8s-diff-port-168174 │ jenkins │ v1.37.0 │ 19 Dec 25 03:52 UTC │ 19 Dec 25 03:54 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-094166 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                           │ old-k8s-version-094166       │ jenkins │ v1.37.0 │ 19 Dec 25 03:53 UTC │ 19 Dec 25 03:53 UTC │
	│ start   │ -p old-k8s-version-094166 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-094166       │ jenkins │ v1.37.0 │ 19 Dec 25 03:53 UTC │ 19 Dec 25 03:53 UTC │
	│ addons  │ enable dashboard -p no-preload-298059 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                │ no-preload-298059            │ jenkins │ v1.37.0 │ 19 Dec 25 03:53 UTC │ 19 Dec 25 03:53 UTC │
	│ start   │ -p no-preload-298059 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-298059            │ jenkins │ v1.37.0 │ 19 Dec 25 03:53 UTC │ 19 Dec 25 04:00 UTC │
	│ addons  │ enable dashboard -p embed-certs-244717 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                               │ embed-certs-244717           │ jenkins │ v1.37.0 │ 19 Dec 25 03:53 UTC │ 19 Dec 25 03:53 UTC │
	│ start   │ -p embed-certs-244717 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-244717           │ jenkins │ v1.37.0 │ 19 Dec 25 03:53 UTC │ 19 Dec 25 04:00 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-168174 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                     │ default-k8s-diff-port-168174 │ jenkins │ v1.37.0 │ 19 Dec 25 03:54 UTC │ 19 Dec 25 03:54 UTC │
	│ start   │ -p default-k8s-diff-port-168174 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-168174 │ jenkins │ v1.37.0 │ 19 Dec 25 03:54 UTC │ 19 Dec 25 04:01 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 03:54:19
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 03:54:19.163618   56230 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:54:19.163755   56230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:54:19.163766   56230 out.go:374] Setting ErrFile to fd 2...
	I1219 03:54:19.163773   56230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:54:19.164086   56230 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5010/.minikube/bin
	I1219 03:54:19.164710   56230 out.go:368] Setting JSON to false
	I1219 03:54:19.166058   56230 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5803,"bootTime":1766110656,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 03:54:19.166138   56230 start.go:143] virtualization: kvm guest
	I1219 03:54:19.167819   56230 out.go:179] * [default-k8s-diff-port-168174] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 03:54:19.168806   56230 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 03:54:19.168798   56230 notify.go:221] Checking for updates...
	I1219 03:54:19.170649   56230 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 03:54:19.171718   56230 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-5010/kubeconfig
	I1219 03:54:19.172800   56230 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5010/.minikube
	I1219 03:54:19.173680   56230 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 03:54:19.174607   56230 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 03:54:19.176155   56230 config.go:182] Loaded profile config "default-k8s-diff-port-168174": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:54:19.176843   56230 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 03:54:19.221795   56230 out.go:179] * Using the kvm2 driver based on existing profile
	I1219 03:54:19.222673   56230 start.go:309] selected driver: kvm2
	I1219 03:54:19.222686   56230 start.go:928] validating driver "kvm2" against &{Name:default-k8s-diff-port-168174 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-168174 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.68 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:54:19.222787   56230 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 03:54:19.223700   56230 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:54:19.223731   56230 cni.go:84] Creating CNI manager for ""
	I1219 03:54:19.223785   56230 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1219 03:54:19.223821   56230 start.go:353] cluster config:
	{Name:default-k8s-diff-port-168174 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-168174 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.68 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:54:19.223901   56230 iso.go:125] acquiring lock: {Name:mk42290af04a74a4dddf27fa33aac85c9bdccfe0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:54:19.225058   56230 out.go:179] * Starting "default-k8s-diff-port-168174" primary control-plane node in "default-k8s-diff-port-168174" cluster
	I1219 03:54:19.225891   56230 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1219 03:54:19.225925   56230 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-5010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1219 03:54:19.225937   56230 cache.go:65] Caching tarball of preloaded images
	I1219 03:54:19.226014   56230 preload.go:238] Found /home/jenkins/minikube-integration/22230-5010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1219 03:54:19.226025   56230 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1219 03:54:19.226103   56230 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/default-k8s-diff-port-168174/config.json ...
	I1219 03:54:19.226379   56230 start.go:360] acquireMachinesLock for default-k8s-diff-port-168174: {Name:mk229398d29442d4d52885bbac963e5004bfbfba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1219 03:54:19.226434   56230 start.go:364] duration metric: took 34.138µs to acquireMachinesLock for "default-k8s-diff-port-168174"
	I1219 03:54:19.226446   56230 start.go:96] Skipping create...Using existing machine configuration
	I1219 03:54:19.226451   56230 fix.go:54] fixHost starting: 
	I1219 03:54:19.228163   56230 fix.go:112] recreateIfNeeded on default-k8s-diff-port-168174: state=Stopped err=<nil>
	W1219 03:54:19.228180   56230 fix.go:138] unexpected machine state, will restart: <nil>
	I1219 03:54:16.533332   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:17.359209   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:17.532886   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:18.033640   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:18.533499   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:19.033373   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:19.533624   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:20.033318   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:20.532932   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:21.032204   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:18.384127   55957 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:54:18.420807   55957 api_server.go:72] duration metric: took 1.537508247s to wait for apiserver process to appear ...
	I1219 03:54:18.420840   55957 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:54:18.420862   55957 api_server.go:253] Checking apiserver healthz at https://192.168.83.54:8443/healthz ...
	I1219 03:54:21.071318   55957 api_server.go:279] https://192.168.83.54:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1219 03:54:21.071349   55957 api_server.go:103] status: https://192.168.83.54:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1219 03:54:21.071368   55957 api_server.go:253] Checking apiserver healthz at https://192.168.83.54:8443/healthz ...
	I1219 03:54:21.151121   55957 api_server.go:279] https://192.168.83.54:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1219 03:54:21.151151   55957 api_server.go:103] status: https://192.168.83.54:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1219 03:54:21.421632   55957 api_server.go:253] Checking apiserver healthz at https://192.168.83.54:8443/healthz ...
	I1219 03:54:21.426745   55957 api_server.go:279] https://192.168.83.54:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:54:21.426773   55957 api_server.go:103] status: https://192.168.83.54:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:54:21.921398   55957 api_server.go:253] Checking apiserver healthz at https://192.168.83.54:8443/healthz ...
	I1219 03:54:21.927340   55957 api_server.go:279] https://192.168.83.54:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:54:21.927368   55957 api_server.go:103] status: https://192.168.83.54:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:54:22.420988   55957 api_server.go:253] Checking apiserver healthz at https://192.168.83.54:8443/healthz ...
	I1219 03:54:22.428236   55957 api_server.go:279] https://192.168.83.54:8443/healthz returned 200:
	ok
	I1219 03:54:22.439161   55957 api_server.go:141] control plane version: v1.34.3
	I1219 03:54:22.439190   55957 api_server.go:131] duration metric: took 4.018341977s to wait for apiserver health ...
	I1219 03:54:22.439202   55957 cni.go:84] Creating CNI manager for ""
	I1219 03:54:22.439211   55957 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1219 03:54:22.440712   55957 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1219 03:54:22.442679   55957 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1219 03:54:22.464908   55957 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1219 03:54:22.524765   55957 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:54:22.531030   55957 system_pods.go:59] 8 kube-system pods found
	I1219 03:54:22.531082   55957 system_pods.go:61] "coredns-66bc5c9577-9ptrv" [22226444-faa6-420d-a862-1ef0441a80e4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:54:22.531096   55957 system_pods.go:61] "etcd-embed-certs-244717" [24fc3b7f-cbce-4591-9d18-9859136e38c6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:54:22.531109   55957 system_pods.go:61] "kube-apiserver-embed-certs-244717" [67d6bc2f-88f4-449a-a029-dc66d6ad1468] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:54:22.531117   55957 system_pods.go:61] "kube-controller-manager-embed-certs-244717" [f2be3fb7-4440-43a3-96ec-b2b241393a84] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:54:22.531126   55957 system_pods.go:61] "kube-proxy-p8gvm" [283607b2-9e6c-44f4-9c9d-7d713c71fb8c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:54:22.531135   55957 system_pods.go:61] "kube-scheduler-embed-certs-244717" [243ff9cd-448c-4e21-98aa-032de9f329fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:54:22.531151   55957 system_pods.go:61] "metrics-server-746fcd58dc-x74d4" [e4a33dcc-d3b6-45a0-92d7-6cfbc5df35b2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:54:22.531159   55957 system_pods.go:61] "storage-provisioner" [99ff9c60-2f30-457a-8cb5-e030eb64a58e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:54:22.531169   55957 system_pods.go:74] duration metric: took 6.378453ms to wait for pod list to return data ...
	I1219 03:54:22.531184   55957 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:54:22.538334   55957 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1219 03:54:22.538361   55957 node_conditions.go:123] node cpu capacity is 2
	I1219 03:54:22.538378   55957 node_conditions.go:105] duration metric: took 7.188571ms to run NodePressure ...
	I1219 03:54:22.538434   55957 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:54:22.838171   55957 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1219 03:54:22.841979   55957 kubeadm.go:744] kubelet initialised
	I1219 03:54:22.842009   55957 kubeadm.go:745] duration metric: took 3.812738ms waiting for restarted kubelet to initialise ...
	I1219 03:54:22.842027   55957 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1219 03:54:22.858280   55957 ops.go:34] apiserver oom_adj: -16
	I1219 03:54:22.858296   55957 kubeadm.go:602] duration metric: took 8.274282939s to restartPrimaryControlPlane
	I1219 03:54:22.858304   55957 kubeadm.go:403] duration metric: took 8.332738451s to StartCluster
	I1219 03:54:22.858319   55957 settings.go:142] acquiring lock: {Name:mkb131693a40b9aa50c302192272518c3561c861 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:54:22.858398   55957 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-5010/kubeconfig
	I1219 03:54:22.860091   55957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5010/kubeconfig: {Name:mk464ac013e036429e360ed78fc04687ed75f83a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:54:22.860306   55957 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.83.54 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1219 03:54:22.860397   55957 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1219 03:54:22.860520   55957 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-244717"
	I1219 03:54:22.860540   55957 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-244717"
	W1219 03:54:22.860553   55957 addons.go:248] addon storage-provisioner should already be in state true
	I1219 03:54:22.860556   55957 addons.go:70] Setting default-storageclass=true in profile "embed-certs-244717"
	I1219 03:54:22.860588   55957 config.go:182] Loaded profile config "embed-certs-244717": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:54:22.860638   55957 addons.go:70] Setting dashboard=true in profile "embed-certs-244717"
	I1219 03:54:22.860664   55957 addons.go:239] Setting addon dashboard=true in "embed-certs-244717"
	W1219 03:54:22.860674   55957 addons.go:248] addon dashboard should already be in state true
	I1219 03:54:22.860596   55957 host.go:66] Checking if "embed-certs-244717" exists ...
	I1219 03:54:22.860698   55957 host.go:66] Checking if "embed-certs-244717" exists ...
	I1219 03:54:22.860603   55957 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-244717"
	I1219 03:54:22.860613   55957 addons.go:70] Setting metrics-server=true in profile "embed-certs-244717"
	I1219 03:54:22.861202   55957 addons.go:239] Setting addon metrics-server=true in "embed-certs-244717"
	W1219 03:54:22.861219   55957 addons.go:248] addon metrics-server should already be in state true
	I1219 03:54:22.861243   55957 host.go:66] Checking if "embed-certs-244717" exists ...
	I1219 03:54:22.861875   55957 out.go:179] * Verifying Kubernetes components...
	I1219 03:54:22.862820   55957 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:54:22.863427   55957 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:54:22.863444   55957 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
	I1219 03:54:22.864891   55957 addons.go:239] Setting addon default-storageclass=true in "embed-certs-244717"
	W1219 03:54:22.864914   55957 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:54:22.864935   55957 host.go:66] Checking if "embed-certs-244717" exists ...
	I1219 03:54:22.866702   55957 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:54:22.866730   55957 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:54:22.866703   55957 main.go:144] libmachine: domain embed-certs-244717 has defined MAC address 52:54:00:c1:1a:c3 in network mk-embed-certs-244717
	I1219 03:54:22.866913   55957 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 03:54:22.867359   55957 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c1:1a:c3", ip: ""} in network mk-embed-certs-244717: {Iface:virbr5 ExpiryTime:2025-12-19 04:54:05 +0000 UTC Type:0 Mac:52:54:00:c1:1a:c3 Iaid: IPaddr:192.168.83.54 Prefix:24 Hostname:embed-certs-244717 Clientid:01:52:54:00:c1:1a:c3}
	I1219 03:54:22.867391   55957 main.go:144] libmachine: domain embed-certs-244717 has defined IP address 192.168.83.54 and MAC address 52:54:00:c1:1a:c3 in network mk-embed-certs-244717
	I1219 03:54:22.867616   55957 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1219 03:54:22.867638   55957 sshutil.go:53] new ssh client: &{IP:192.168.83.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/embed-certs-244717/id_rsa Username:docker}
	I1219 03:54:22.868328   55957 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:54:22.868344   55957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:54:22.868968   55957 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1219 03:54:22.869019   55957 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1219 03:54:22.870937   55957 main.go:144] libmachine: domain embed-certs-244717 has defined MAC address 52:54:00:c1:1a:c3 in network mk-embed-certs-244717
	I1219 03:54:22.871717   55957 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c1:1a:c3", ip: ""} in network mk-embed-certs-244717: {Iface:virbr5 ExpiryTime:2025-12-19 04:54:05 +0000 UTC Type:0 Mac:52:54:00:c1:1a:c3 Iaid: IPaddr:192.168.83.54 Prefix:24 Hostname:embed-certs-244717 Clientid:01:52:54:00:c1:1a:c3}
	I1219 03:54:22.871748   55957 main.go:144] libmachine: domain embed-certs-244717 has defined IP address 192.168.83.54 and MAC address 52:54:00:c1:1a:c3 in network mk-embed-certs-244717
	I1219 03:54:22.871986   55957 sshutil.go:53] new ssh client: &{IP:192.168.83.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/embed-certs-244717/id_rsa Username:docker}
	I1219 03:54:22.872790   55957 main.go:144] libmachine: domain embed-certs-244717 has defined MAC address 52:54:00:c1:1a:c3 in network mk-embed-certs-244717
	I1219 03:54:22.873111   55957 main.go:144] libmachine: domain embed-certs-244717 has defined MAC address 52:54:00:c1:1a:c3 in network mk-embed-certs-244717
	I1219 03:54:22.873212   55957 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c1:1a:c3", ip: ""} in network mk-embed-certs-244717: {Iface:virbr5 ExpiryTime:2025-12-19 04:54:05 +0000 UTC Type:0 Mac:52:54:00:c1:1a:c3 Iaid: IPaddr:192.168.83.54 Prefix:24 Hostname:embed-certs-244717 Clientid:01:52:54:00:c1:1a:c3}
	I1219 03:54:22.873235   55957 main.go:144] libmachine: domain embed-certs-244717 has defined IP address 192.168.83.54 and MAC address 52:54:00:c1:1a:c3 in network mk-embed-certs-244717
	I1219 03:54:22.873423   55957 sshutil.go:53] new ssh client: &{IP:192.168.83.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/embed-certs-244717/id_rsa Username:docker}
	I1219 03:54:22.873635   55957 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c1:1a:c3", ip: ""} in network mk-embed-certs-244717: {Iface:virbr5 ExpiryTime:2025-12-19 04:54:05 +0000 UTC Type:0 Mac:52:54:00:c1:1a:c3 Iaid: IPaddr:192.168.83.54 Prefix:24 Hostname:embed-certs-244717 Clientid:01:52:54:00:c1:1a:c3}
	I1219 03:54:22.873666   55957 main.go:144] libmachine: domain embed-certs-244717 has defined IP address 192.168.83.54 and MAC address 52:54:00:c1:1a:c3 in network mk-embed-certs-244717
	I1219 03:54:22.873832   55957 sshutil.go:53] new ssh client: &{IP:192.168.83.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/embed-certs-244717/id_rsa Username:docker}
	I1219 03:54:23.104462   55957 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:54:23.139781   55957 node_ready.go:35] waiting up to 6m0s for node "embed-certs-244717" to be "Ready" ...
	I1219 03:54:19.229464   56230 out.go:252] * Restarting existing kvm2 VM for "default-k8s-diff-port-168174" ...
	I1219 03:54:19.229501   56230 main.go:144] libmachine: starting domain...
	I1219 03:54:19.229509   56230 main.go:144] libmachine: ensuring networks are active...
	I1219 03:54:19.230233   56230 main.go:144] libmachine: Ensuring network default is active
	I1219 03:54:19.230721   56230 main.go:144] libmachine: Ensuring network mk-default-k8s-diff-port-168174 is active
	I1219 03:54:19.231248   56230 main.go:144] libmachine: getting domain XML...
	I1219 03:54:19.232369   56230 main.go:144] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>default-k8s-diff-port-168174</name>
	  <uuid>5503b0a8-1398-475d-b625-563c5bc2d168</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22230-5010/.minikube/machines/default-k8s-diff-port-168174/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22230-5010/.minikube/machines/default-k8s-diff-port-168174/default-k8s-diff-port-168174.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:d9:97:a2'/>
	      <source network='mk-default-k8s-diff-port-168174'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:3f:9e:c8'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1219 03:54:20.662520   56230 main.go:144] libmachine: waiting for domain to start...
	I1219 03:54:20.663943   56230 main.go:144] libmachine: domain is now running
	I1219 03:54:20.663969   56230 main.go:144] libmachine: waiting for IP...
	I1219 03:54:20.664770   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:20.665467   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has current primary IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:20.665481   56230 main.go:144] libmachine: found domain IP: 192.168.50.68
	I1219 03:54:20.665486   56230 main.go:144] libmachine: reserving static IP address...
	I1219 03:54:20.665943   56230 main.go:144] libmachine: found host DHCP lease matching {name: "default-k8s-diff-port-168174", mac: "52:54:00:d9:97:a2", ip: "192.168.50.68"} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:51:35 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:20.665989   56230 main.go:144] libmachine: skip adding static IP to network mk-default-k8s-diff-port-168174 - found existing host DHCP lease matching {name: "default-k8s-diff-port-168174", mac: "52:54:00:d9:97:a2", ip: "192.168.50.68"}
	I1219 03:54:20.666003   56230 main.go:144] libmachine: reserved static IP address 192.168.50.68 for domain default-k8s-diff-port-168174
	I1219 03:54:20.666019   56230 main.go:144] libmachine: waiting for SSH...
	I1219 03:54:20.666027   56230 main.go:144] libmachine: Getting to WaitForSSH function...
	I1219 03:54:20.668799   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:20.669225   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:51:35 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:20.669267   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:20.669495   56230 main.go:144] libmachine: Using SSH client type: native
	I1219 03:54:20.669789   56230 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.50.68 22 <nil> <nil>}
	I1219 03:54:20.669805   56230 main.go:144] libmachine: About to run SSH command:
	exit 0
	I1219 03:54:23.725788   56230 main.go:144] libmachine: Error dialing TCP: dial tcp 192.168.50.68:22: connect: no route to host
	I1219 03:54:21.532614   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:22.032788   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:22.532959   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:23.032773   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:23.531977   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:24.033500   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:24.532177   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:25.033441   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:25.533482   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:26.031758   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:23.198551   55957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:54:23.404667   55957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:54:23.420466   55957 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:54:23.445604   55957 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1219 03:54:23.445631   55957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1219 03:54:23.525300   55957 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1219 03:54:23.525326   55957 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1219 03:54:23.593759   55957 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:54:23.593784   55957 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1219 03:54:23.645141   55957 node_ready.go:49] node "embed-certs-244717" is "Ready"
	I1219 03:54:23.645171   55957 node_ready.go:38] duration metric: took 505.352434ms for node "embed-certs-244717" to be "Ready" ...
	I1219 03:54:23.645183   55957 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:54:23.645241   55957 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:54:23.652800   55957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:54:24.781529   55957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.376827148s)
	I1219 03:54:24.781591   55957 ssh_runner.go:235] Completed: test -f /usr/bin/helm: (1.361072264s)
	I1219 03:54:24.781616   55957 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.136359787s)
	I1219 03:54:24.781638   55957 api_server.go:72] duration metric: took 1.9213054s to wait for apiserver process to appear ...
	I1219 03:54:24.781645   55957 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:54:24.781662   55957 api_server.go:253] Checking apiserver healthz at https://192.168.83.54:8443/healthz ...
	I1219 03:54:24.781671   55957 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:54:24.791019   55957 api_server.go:279] https://192.168.83.54:8443/healthz returned 200:
	ok
	I1219 03:54:24.791945   55957 api_server.go:141] control plane version: v1.34.3
	I1219 03:54:24.791970   55957 api_server.go:131] duration metric: took 10.31791ms to wait for apiserver health ...
	I1219 03:54:24.791980   55957 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:54:24.795539   55957 system_pods.go:59] 8 kube-system pods found
	I1219 03:54:24.795599   55957 system_pods.go:61] "coredns-66bc5c9577-9ptrv" [22226444-faa6-420d-a862-1ef0441a80e4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:54:24.795612   55957 system_pods.go:61] "etcd-embed-certs-244717" [24fc3b7f-cbce-4591-9d18-9859136e38c6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:54:24.795627   55957 system_pods.go:61] "kube-apiserver-embed-certs-244717" [67d6bc2f-88f4-449a-a029-dc66d6ad1468] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:54:24.795638   55957 system_pods.go:61] "kube-controller-manager-embed-certs-244717" [f2be3fb7-4440-43a3-96ec-b2b241393a84] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:54:24.795644   55957 system_pods.go:61] "kube-proxy-p8gvm" [283607b2-9e6c-44f4-9c9d-7d713c71fb8c] Running
	I1219 03:54:24.795655   55957 system_pods.go:61] "kube-scheduler-embed-certs-244717" [243ff9cd-448c-4e21-98aa-032de9f329fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:54:24.795666   55957 system_pods.go:61] "metrics-server-746fcd58dc-x74d4" [e4a33dcc-d3b6-45a0-92d7-6cfbc5df35b2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:54:24.795671   55957 system_pods.go:61] "storage-provisioner" [99ff9c60-2f30-457a-8cb5-e030eb64a58e] Running
	I1219 03:54:24.795683   55957 system_pods.go:74] duration metric: took 3.696303ms to wait for pod list to return data ...
	I1219 03:54:24.795694   55957 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:54:24.797860   55957 default_sa.go:45] found service account: "default"
	I1219 03:54:24.797884   55957 default_sa.go:55] duration metric: took 2.181869ms for default service account to be created ...
	I1219 03:54:24.797895   55957 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:54:24.800212   55957 system_pods.go:86] 8 kube-system pods found
	I1219 03:54:24.800242   55957 system_pods.go:89] "coredns-66bc5c9577-9ptrv" [22226444-faa6-420d-a862-1ef0441a80e4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:54:24.800255   55957 system_pods.go:89] "etcd-embed-certs-244717" [24fc3b7f-cbce-4591-9d18-9859136e38c6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:54:24.800267   55957 system_pods.go:89] "kube-apiserver-embed-certs-244717" [67d6bc2f-88f4-449a-a029-dc66d6ad1468] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:54:24.800277   55957 system_pods.go:89] "kube-controller-manager-embed-certs-244717" [f2be3fb7-4440-43a3-96ec-b2b241393a84] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:54:24.800283   55957 system_pods.go:89] "kube-proxy-p8gvm" [283607b2-9e6c-44f4-9c9d-7d713c71fb8c] Running
	I1219 03:54:24.800291   55957 system_pods.go:89] "kube-scheduler-embed-certs-244717" [243ff9cd-448c-4e21-98aa-032de9f329fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:54:24.800300   55957 system_pods.go:89] "metrics-server-746fcd58dc-x74d4" [e4a33dcc-d3b6-45a0-92d7-6cfbc5df35b2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:54:24.800307   55957 system_pods.go:89] "storage-provisioner" [99ff9c60-2f30-457a-8cb5-e030eb64a58e] Running
	I1219 03:54:24.800317   55957 system_pods.go:126] duration metric: took 2.415918ms to wait for k8s-apps to be running ...
	I1219 03:54:24.800326   55957 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:54:24.800389   55957 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:54:24.901954   55957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.249113047s)
	I1219 03:54:24.901997   55957 addons.go:500] Verifying addon metrics-server=true in "embed-certs-244717"
	I1219 03:54:24.902043   55957 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	I1219 03:54:24.902053   55957 system_svc.go:56] duration metric: took 101.72157ms WaitForService to wait for kubelet
	I1219 03:54:24.902083   55957 kubeadm.go:587] duration metric: took 2.041739112s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:54:24.902106   55957 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:54:24.912597   55957 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1219 03:54:24.912623   55957 node_conditions.go:123] node cpu capacity is 2
	I1219 03:54:24.912638   55957 node_conditions.go:105] duration metric: took 10.525951ms to run NodePressure ...
	I1219 03:54:24.912652   55957 start.go:242] waiting for startup goroutines ...
	I1219 03:54:25.801998   55957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	I1219 03:54:29.507152   55957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (3.70510669s)
	I1219 03:54:29.507259   55957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:54:29.992247   55957 addons.go:500] Verifying addon dashboard=true in "embed-certs-244717"
	I1219 03:54:29.995517   55957 out.go:179] * Verifying dashboard addon...
	I1219 03:54:26.531479   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:27.031454   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:27.532215   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:28.032964   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:28.532268   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:29.032253   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:29.533154   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:30.032379   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:30.532853   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:31.032643   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:29.998065   55957 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:54:30.003541   55957 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:54:30.003561   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:30.510371   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:31.003319   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:31.502854   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:32.002809   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:32.503083   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:33.001709   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:29.805953   56230 main.go:144] libmachine: Error dialing TCP: dial tcp 192.168.50.68:22: connect: no route to host
	I1219 03:54:32.806901   56230 main.go:144] libmachine: Error dialing TCP: dial tcp 192.168.50.68:22: connect: connection refused
	I1219 03:54:31.531396   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:32.033946   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:32.532063   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:33.033088   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:33.532601   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:34.032154   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:34.532734   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:35.031403   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:35.532231   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:36.031798   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:33.501421   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:34.001823   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:34.501944   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:35.001242   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:35.502033   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:36.001834   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:36.503279   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:37.002832   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:37.501859   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:38.002218   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:35.914133   56230 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 03:54:35.917629   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:35.918062   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:35.918084   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:35.918331   56230 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/default-k8s-diff-port-168174/config.json ...
	I1219 03:54:35.918603   56230 machine.go:94] provisionDockerMachine start ...
	I1219 03:54:35.921009   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:35.921341   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:35.921380   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:35.921581   56230 main.go:144] libmachine: Using SSH client type: native
	I1219 03:54:35.921797   56230 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.50.68 22 <nil> <nil>}
	I1219 03:54:35.921810   56230 main.go:144] libmachine: About to run SSH command:
	hostname
	I1219 03:54:36.027619   56230 main.go:144] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1219 03:54:36.027644   56230 buildroot.go:166] provisioning hostname "default-k8s-diff-port-168174"
	I1219 03:54:36.030973   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.031540   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:36.031597   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.031855   56230 main.go:144] libmachine: Using SSH client type: native
	I1219 03:54:36.032105   56230 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.50.68 22 <nil> <nil>}
	I1219 03:54:36.032121   56230 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-168174 && echo "default-k8s-diff-port-168174" | sudo tee /etc/hostname
	I1219 03:54:36.154920   56230 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-168174
	
	I1219 03:54:36.157818   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.158270   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:36.158298   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.158481   56230 main.go:144] libmachine: Using SSH client type: native
	I1219 03:54:36.158705   56230 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.50.68 22 <nil> <nil>}
	I1219 03:54:36.158721   56230 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-168174' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-168174/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-168174' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1219 03:54:36.278763   56230 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 03:54:36.278793   56230 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22230-5010/.minikube CaCertPath:/home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22230-5010/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22230-5010/.minikube}
	I1219 03:54:36.278815   56230 buildroot.go:174] setting up certificates
	I1219 03:54:36.278825   56230 provision.go:84] configureAuth start
	I1219 03:54:36.282034   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.282595   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:36.282631   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.285039   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.285396   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:36.285421   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.285558   56230 provision.go:143] copyHostCerts
	I1219 03:54:36.285634   56230 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-5010/.minikube/ca.pem, removing ...
	I1219 03:54:36.285655   56230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-5010/.minikube/ca.pem
	I1219 03:54:36.285732   56230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22230-5010/.minikube/ca.pem (1082 bytes)
	I1219 03:54:36.285873   56230 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-5010/.minikube/cert.pem, removing ...
	I1219 03:54:36.285889   56230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-5010/.minikube/cert.pem
	I1219 03:54:36.285939   56230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22230-5010/.minikube/cert.pem (1123 bytes)
	I1219 03:54:36.286034   56230 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-5010/.minikube/key.pem, removing ...
	I1219 03:54:36.286044   56230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-5010/.minikube/key.pem
	I1219 03:54:36.286086   56230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22230-5010/.minikube/key.pem (1679 bytes)
	I1219 03:54:36.286187   56230 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22230-5010/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-168174 san=[127.0.0.1 192.168.50.68 default-k8s-diff-port-168174 localhost minikube]
	I1219 03:54:36.425832   56230 provision.go:177] copyRemoteCerts
	I1219 03:54:36.425892   56230 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1219 03:54:36.428255   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.428656   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:36.428686   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.428839   56230 sshutil.go:53] new ssh client: &{IP:192.168.50.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/default-k8s-diff-port-168174/id_rsa Username:docker}
	I1219 03:54:36.519020   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1219 03:54:36.558591   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1219 03:54:36.592448   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1219 03:54:36.618754   56230 provision.go:87] duration metric: took 339.918165ms to configureAuth
	I1219 03:54:36.618782   56230 buildroot.go:189] setting minikube options for container-runtime
	I1219 03:54:36.618965   56230 config.go:182] Loaded profile config "default-k8s-diff-port-168174": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:54:36.622080   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.622643   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:36.622690   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.622932   56230 main.go:144] libmachine: Using SSH client type: native
	I1219 03:54:36.623146   56230 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.50.68 22 <nil> <nil>}
	I1219 03:54:36.623170   56230 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1219 03:54:36.870072   56230 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1219 03:54:36.870099   56230 machine.go:97] duration metric: took 951.477635ms to provisionDockerMachine
	I1219 03:54:36.870113   56230 start.go:293] postStartSetup for "default-k8s-diff-port-168174" (driver="kvm2")
	I1219 03:54:36.870125   56230 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1219 03:54:36.870211   56230 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1219 03:54:36.873360   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.873824   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:36.873854   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:36.873997   56230 sshutil.go:53] new ssh client: &{IP:192.168.50.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/default-k8s-diff-port-168174/id_rsa Username:docker}
	I1219 03:54:36.957455   56230 ssh_runner.go:195] Run: cat /etc/os-release
	I1219 03:54:36.962098   56230 info.go:137] Remote host: Buildroot 2025.02
	I1219 03:54:36.962123   56230 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-5010/.minikube/addons for local assets ...
	I1219 03:54:36.962187   56230 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-5010/.minikube/files for local assets ...
	I1219 03:54:36.962258   56230 filesync.go:149] local asset: /home/jenkins/minikube-integration/22230-5010/.minikube/files/etc/ssl/certs/89372.pem -> 89372.pem in /etc/ssl/certs
	I1219 03:54:36.962365   56230 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1219 03:54:36.973208   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/files/etc/ssl/certs/89372.pem --> /etc/ssl/certs/89372.pem (1708 bytes)
	I1219 03:54:37.001535   56230 start.go:296] duration metric: took 131.409863ms for postStartSetup
	I1219 03:54:37.001590   56230 fix.go:56] duration metric: took 17.775113489s for fixHost
	I1219 03:54:37.004880   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:37.005287   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:37.005312   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:37.005528   56230 main.go:144] libmachine: Using SSH client type: native
	I1219 03:54:37.005820   56230 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.50.68 22 <nil> <nil>}
	I1219 03:54:37.005839   56230 main.go:144] libmachine: About to run SSH command:
	date +%s.%N
	I1219 03:54:37.113597   56230 main.go:144] libmachine: SSH cmd err, output: <nil>: 1766116477.079572846
	
	I1219 03:54:37.113621   56230 fix.go:216] guest clock: 1766116477.079572846
	I1219 03:54:37.113630   56230 fix.go:229] Guest: 2025-12-19 03:54:37.079572846 +0000 UTC Remote: 2025-12-19 03:54:37.001596336 +0000 UTC m=+17.891500693 (delta=77.97651ms)
	I1219 03:54:37.113645   56230 fix.go:200] guest clock delta is within tolerance: 77.97651ms
	I1219 03:54:37.113651   56230 start.go:83] releasing machines lock for "default-k8s-diff-port-168174", held for 17.887209269s
	I1219 03:54:37.116322   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:37.116867   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:37.116898   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:37.117549   56230 ssh_runner.go:195] Run: cat /version.json
	I1219 03:54:37.117645   56230 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1219 03:54:37.121299   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:37.121532   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:37.121841   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:37.121885   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:37.122114   56230 sshutil.go:53] new ssh client: &{IP:192.168.50.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/default-k8s-diff-port-168174/id_rsa Username:docker}
	I1219 03:54:37.122168   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:37.122203   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:37.122439   56230 sshutil.go:53] new ssh client: &{IP:192.168.50.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/default-k8s-diff-port-168174/id_rsa Username:docker}
	I1219 03:54:37.200188   56230 ssh_runner.go:195] Run: systemctl --version
	I1219 03:54:37.236006   56230 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1219 03:54:37.382400   56230 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1219 03:54:37.391093   56230 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1219 03:54:37.391172   56230 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1219 03:54:37.412549   56230 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1219 03:54:37.412595   56230 start.go:496] detecting cgroup driver to use...
	I1219 03:54:37.412701   56230 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1219 03:54:37.432292   56230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1219 03:54:37.448705   56230 docker.go:218] disabling cri-docker service (if available) ...
	I1219 03:54:37.448757   56230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1219 03:54:37.464885   56230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1219 03:54:37.488524   56230 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1219 03:54:37.648374   56230 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1219 03:54:37.863271   56230 docker.go:234] disabling docker service ...
	I1219 03:54:37.863333   56230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1219 03:54:37.880285   56230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1219 03:54:37.895631   56230 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1219 03:54:38.053642   56230 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1219 03:54:38.210829   56230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1219 03:54:38.227130   56230 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 03:54:38.248699   56230 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1219 03:54:38.248763   56230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:54:38.260875   56230 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1219 03:54:38.260939   56230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:54:38.273032   56230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:54:38.284839   56230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:54:38.296706   56230 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1219 03:54:38.309100   56230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:54:38.320373   56230 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:54:38.343213   56230 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:54:38.355251   56230 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1219 03:54:38.366693   56230 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1219 03:54:38.366745   56230 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1219 03:54:38.386325   56230 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1219 03:54:38.397641   56230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:54:38.542778   56230 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1219 03:54:38.656266   56230 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1219 03:54:38.656354   56230 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1219 03:54:38.662225   56230 start.go:564] Will wait 60s for crictl version
	I1219 03:54:38.662286   56230 ssh_runner.go:195] Run: which crictl
	I1219 03:54:38.666072   56230 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1219 03:54:38.702242   56230 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1219 03:54:38.702324   56230 ssh_runner.go:195] Run: crio --version
	I1219 03:54:38.730733   56230 ssh_runner.go:195] Run: crio --version
	I1219 03:54:38.760806   56230 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.29.1 ...
	I1219 03:54:38.764622   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:38.765017   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:38.765041   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:38.765207   56230 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1219 03:54:38.769555   56230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:54:38.784218   56230 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-168174 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-168174 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.68 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1219 03:54:38.784318   56230 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1219 03:54:38.784389   56230 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:54:38.817654   56230 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.3". assuming images are not preloaded.
	I1219 03:54:38.817721   56230 ssh_runner.go:195] Run: which lz4
	I1219 03:54:38.821795   56230 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1219 03:54:38.826295   56230 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1219 03:54:38.826327   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340314847 bytes)
	I1219 03:54:36.531538   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:37.031367   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:37.531677   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:38.031134   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:38.532312   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:39.032552   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:39.532678   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:40.031267   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:40.531858   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:41.032379   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:38.502453   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:39.002949   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:39.502152   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:40.002580   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:40.501440   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:41.002612   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:41.501822   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:42.002247   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:42.502196   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:43.002641   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:40.045060   56230 crio.go:462] duration metric: took 1.223302426s to copy over tarball
	I1219 03:54:40.045121   56230 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1219 03:54:41.702628   56230 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.657483082s)
	I1219 03:54:41.702653   56230 crio.go:469] duration metric: took 1.657571319s to extract the tarball
	I1219 03:54:41.702661   56230 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1219 03:54:41.742396   56230 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:54:41.778250   56230 crio.go:514] all images are preloaded for cri-o runtime.
	I1219 03:54:41.778274   56230 cache_images.go:86] Images are preloaded, skipping loading
	I1219 03:54:41.778281   56230 kubeadm.go:935] updating node { 192.168.50.68 8444 v1.34.3 crio true true} ...
	I1219 03:54:41.778393   56230 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-168174 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.68
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-168174 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1219 03:54:41.778466   56230 ssh_runner.go:195] Run: crio config
	I1219 03:54:41.824084   56230 cni.go:84] Creating CNI manager for ""
	I1219 03:54:41.824114   56230 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1219 03:54:41.824134   56230 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1219 03:54:41.824161   56230 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.68 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-168174 NodeName:default-k8s-diff-port-168174 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.68"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.68 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1219 03:54:41.824332   56230 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.68
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-168174"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.68"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.68"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1219 03:54:41.824436   56230 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1219 03:54:41.838181   56230 binaries.go:51] Found k8s binaries, skipping transfer
	I1219 03:54:41.838263   56230 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1219 03:54:41.850122   56230 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1219 03:54:41.871647   56230 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1219 03:54:41.891031   56230 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
	I1219 03:54:41.910970   56230 ssh_runner.go:195] Run: grep 192.168.50.68	control-plane.minikube.internal$ /etc/hosts
	I1219 03:54:41.915265   56230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.68	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:54:41.929042   56230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:54:42.077837   56230 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:54:42.111492   56230 certs.go:69] Setting up /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/default-k8s-diff-port-168174 for IP: 192.168.50.68
	I1219 03:54:42.111515   56230 certs.go:195] generating shared ca certs ...
	I1219 03:54:42.111529   56230 certs.go:227] acquiring lock for ca certs: {Name:mk433b742ffd55233764aebe3c6f298e0d1f02ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:54:42.111713   56230 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22230-5010/.minikube/ca.key
	I1219 03:54:42.111782   56230 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22230-5010/.minikube/proxy-client-ca.key
	I1219 03:54:42.111804   56230 certs.go:257] generating profile certs ...
	I1219 03:54:42.111942   56230 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/default-k8s-diff-port-168174/client.key
	I1219 03:54:42.112027   56230 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/default-k8s-diff-port-168174/apiserver.key.ed8a7a08
	I1219 03:54:42.112078   56230 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/default-k8s-diff-port-168174/proxy-client.key
	I1219 03:54:42.112201   56230 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/8937.pem (1338 bytes)
	W1219 03:54:42.112240   56230 certs.go:480] ignoring /home/jenkins/minikube-integration/22230-5010/.minikube/certs/8937_empty.pem, impossibly tiny 0 bytes
	I1219 03:54:42.112252   56230 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca-key.pem (1675 bytes)
	I1219 03:54:42.112280   56230 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca.pem (1082 bytes)
	I1219 03:54:42.112309   56230 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/cert.pem (1123 bytes)
	I1219 03:54:42.112361   56230 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/key.pem (1679 bytes)
	I1219 03:54:42.112423   56230 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/files/etc/ssl/certs/89372.pem (1708 bytes)
	I1219 03:54:42.113420   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1219 03:54:42.154291   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1219 03:54:42.194006   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1219 03:54:42.221732   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1219 03:54:42.253007   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/default-k8s-diff-port-168174/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1219 03:54:42.280935   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/default-k8s-diff-port-168174/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1219 03:54:42.315083   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/default-k8s-diff-port-168174/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1219 03:54:42.342426   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/default-k8s-diff-port-168174/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1219 03:54:42.371444   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1219 03:54:42.402350   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/certs/8937.pem --> /usr/share/ca-certificates/8937.pem (1338 bytes)
	I1219 03:54:42.430533   56230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/files/etc/ssl/certs/89372.pem --> /usr/share/ca-certificates/89372.pem (1708 bytes)
	I1219 03:54:42.462798   56230 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1219 03:54:42.483977   56230 ssh_runner.go:195] Run: openssl version
	I1219 03:54:42.490839   56230 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/89372.pem
	I1219 03:54:42.503565   56230 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/89372.pem /etc/ssl/certs/89372.pem
	I1219 03:54:42.514852   56230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/89372.pem
	I1219 03:54:42.520693   56230 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 19 02:46 /usr/share/ca-certificates/89372.pem
	I1219 03:54:42.520739   56230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/89372.pem
	I1219 03:54:42.528108   56230 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1219 03:54:42.539720   56230 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/89372.pem /etc/ssl/certs/3ec20f2e.0
	I1219 03:54:42.550915   56230 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:54:42.561679   56230 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1219 03:54:42.572526   56230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:54:42.577725   56230 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 19 02:26 /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:54:42.577781   56230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:54:42.584786   56230 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1219 03:54:42.596115   56230 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1219 03:54:42.607332   56230 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8937.pem
	I1219 03:54:42.618682   56230 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8937.pem /etc/ssl/certs/8937.pem
	I1219 03:54:42.630292   56230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8937.pem
	I1219 03:54:42.635409   56230 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 19 02:46 /usr/share/ca-certificates/8937.pem
	I1219 03:54:42.635452   56230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8937.pem
	I1219 03:54:42.642710   56230 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1219 03:54:42.654104   56230 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8937.pem /etc/ssl/certs/51391683.0
	I1219 03:54:42.666207   56230 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1219 03:54:42.671385   56230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1219 03:54:42.678373   56230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1219 03:54:42.685534   56230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1219 03:54:42.692140   56230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1219 03:54:42.698549   56230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1219 03:54:42.705279   56230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1219 03:54:42.712285   56230 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-168174 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-168174 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.68 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:54:42.712383   56230 cri.go:57] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1219 03:54:42.712433   56230 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:54:42.745951   56230 cri.go:92] found id: ""
	I1219 03:54:42.746000   56230 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1219 03:54:42.757185   56230 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1219 03:54:42.757201   56230 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1219 03:54:42.757240   56230 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1219 03:54:42.768155   56230 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1219 03:54:42.769156   56230 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-168174" does not appear in /home/jenkins/minikube-integration/22230-5010/kubeconfig
	I1219 03:54:42.769826   56230 kubeconfig.go:62] /home/jenkins/minikube-integration/22230-5010/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-168174" cluster setting kubeconfig missing "default-k8s-diff-port-168174" context setting]
	I1219 03:54:42.770666   56230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5010/kubeconfig: {Name:mk464ac013e036429e360ed78fc04687ed75f83a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:54:42.772207   56230 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1219 03:54:42.782776   56230 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.50.68
	I1219 03:54:42.782799   56230 kubeadm.go:1161] stopping kube-system containers ...
	I1219 03:54:42.782811   56230 cri.go:57] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1219 03:54:42.782853   56230 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:54:42.827373   56230 cri.go:92] found id: ""
	I1219 03:54:42.827451   56230 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1219 03:54:42.855644   56230 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1219 03:54:42.867640   56230 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1219 03:54:42.867664   56230 kubeadm.go:158] found existing configuration files:
	
	I1219 03:54:42.867713   56230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1219 03:54:42.879242   56230 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1219 03:54:42.879345   56230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1219 03:54:42.890737   56230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1219 03:54:42.900979   56230 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1219 03:54:42.901033   56230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1219 03:54:42.911989   56230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1219 03:54:42.922081   56230 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1219 03:54:42.922121   56230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1219 03:54:42.933197   56230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1219 03:54:42.943650   56230 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1219 03:54:42.943706   56230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1219 03:54:42.954819   56230 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1219 03:54:42.965503   56230 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:54:43.022499   56230 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:54:41.533216   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:42.031785   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:42.531762   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:43.032044   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:43.531965   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:44.032348   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:44.532701   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:45.032707   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:45.531729   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:46.032031   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:43.502226   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:44.002160   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:44.502401   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:45.002719   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:45.502332   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:46.001536   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:46.501990   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:47.002547   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:47.501403   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:48.002631   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:44.652743   56230 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.630210852s)
	I1219 03:54:44.652817   56230 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:54:44.912221   56230 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:54:44.996004   56230 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:54:45.067644   56230 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:54:45.067725   56230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:54:45.568080   56230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:54:46.068722   56230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:54:46.568114   56230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:54:47.068013   56230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:54:47.117129   56230 api_server.go:72] duration metric: took 2.049494189s to wait for apiserver process to appear ...
	I1219 03:54:47.117153   56230 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:54:47.117174   56230 api_server.go:253] Checking apiserver healthz at https://192.168.50.68:8444/healthz ...
	I1219 03:54:47.117680   56230 api_server.go:269] stopped: https://192.168.50.68:8444/healthz: Get "https://192.168.50.68:8444/healthz": dial tcp 192.168.50.68:8444: connect: connection refused
	I1219 03:54:47.617323   56230 api_server.go:253] Checking apiserver healthz at https://192.168.50.68:8444/healthz ...
	I1219 03:54:46.534635   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:47.031616   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:47.531182   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:48.032359   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:48.532986   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:49.031214   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:49.532385   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:50.032130   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:50.532478   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:51.031638   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:49.988621   56230 api_server.go:279] https://192.168.50.68:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1219 03:54:49.988647   56230 api_server.go:103] status: https://192.168.50.68:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1219 03:54:49.988661   56230 api_server.go:253] Checking apiserver healthz at https://192.168.50.68:8444/healthz ...
	I1219 03:54:50.015383   56230 api_server.go:279] https://192.168.50.68:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1219 03:54:50.015404   56230 api_server.go:103] status: https://192.168.50.68:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1219 03:54:50.117699   56230 api_server.go:253] Checking apiserver healthz at https://192.168.50.68:8444/healthz ...
	I1219 03:54:50.129872   56230 api_server.go:279] https://192.168.50.68:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:54:50.129895   56230 api_server.go:103] status: https://192.168.50.68:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:54:50.617488   56230 api_server.go:253] Checking apiserver healthz at https://192.168.50.68:8444/healthz ...
	I1219 03:54:50.622220   56230 api_server.go:279] https://192.168.50.68:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:54:50.622255   56230 api_server.go:103] status: https://192.168.50.68:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:54:51.117929   56230 api_server.go:253] Checking apiserver healthz at https://192.168.50.68:8444/healthz ...
	I1219 03:54:51.126710   56230 api_server.go:279] https://192.168.50.68:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:54:51.126741   56230 api_server.go:103] status: https://192.168.50.68:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:54:51.617345   56230 api_server.go:253] Checking apiserver healthz at https://192.168.50.68:8444/healthz ...
	I1219 03:54:51.622349   56230 api_server.go:279] https://192.168.50.68:8444/healthz returned 200:
	ok
	I1219 03:54:51.628913   56230 api_server.go:141] control plane version: v1.34.3
	I1219 03:54:51.628947   56230 api_server.go:131] duration metric: took 4.511785965s to wait for apiserver health ...
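The healthz wait logged above is a simple poll-until-200 pattern: 403 responses (anonymous access before the RBAC bootstrap hook finishes) and 500 responses (poststart hooks still failing) are both treated as "not ready yet" and retried until a plain "ok" comes back. A minimal, self-contained Go sketch of that pattern, illustrative only and not minikube's actual api_server.go code, using the endpoint from this log, could look like:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns 200 OK
// or the timeout elapses. Early 403/500 responses during bring-up are simply
// treated as "not ready yet", mirroring the behaviour visible in the log.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver serves its own cert during bring-up; a production
		// client would trust the cluster CA instead of skipping verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.68:8444/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}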
	I1219 03:54:51.628957   56230 cni.go:84] Creating CNI manager for ""
	I1219 03:54:51.628965   56230 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1219 03:54:51.630494   56230 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1219 03:54:51.631426   56230 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1219 03:54:51.647385   56230 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1219 03:54:51.669320   56230 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:54:51.675232   56230 system_pods.go:59] 8 kube-system pods found
	I1219 03:54:51.675273   56230 system_pods.go:61] "coredns-66bc5c9577-dnfcc" [b8ee66a7-b129-4499-aad8-a988ecea241c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:54:51.675288   56230 system_pods.go:61] "etcd-default-k8s-diff-port-168174" [af7e1768-95b9-431e-877a-63138a91dcdc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:54:51.675298   56230 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-168174" [a59f79b0-b99b-4092-8b4a-773ff0a04569] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:54:51.675318   56230 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-168174" [405651c9-cdc7-414f-a512-f11b7b580eb8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:54:51.675328   56230 system_pods.go:61] "kube-proxy-zs4wg" [2212782c-32ab-4355-8dda-9117953b0223] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:54:51.675338   56230 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-168174" [a524e706-e546-4aa4-848c-f52eb802ffb0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:54:51.675347   56230 system_pods.go:61] "metrics-server-746fcd58dc-xjkbx" [c6e2f2b2-7b94-4ff2-85ba-e79d72b30655] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:54:51.675358   56230 system_pods.go:61] "storage-provisioner" [ec1d1888-a950-48d5-9b73-440e7556818b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:54:51.675366   56230 system_pods.go:74] duration metric: took 6.023523ms to wait for pod list to return data ...
	I1219 03:54:51.675387   56230 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:54:51.680456   56230 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1219 03:54:51.680483   56230 node_conditions.go:123] node cpu capacity is 2
	I1219 03:54:51.680500   56230 node_conditions.go:105] duration metric: took 5.106096ms to run NodePressure ...
	I1219 03:54:51.680558   56230 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:54:51.941503   56230 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1219 03:54:51.945528   56230 kubeadm.go:744] kubelet initialised
	I1219 03:54:51.945566   56230 kubeadm.go:745] duration metric: took 4.028139ms waiting for restarted kubelet to initialise ...
	I1219 03:54:51.945597   56230 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1219 03:54:51.967660   56230 ops.go:34] apiserver oom_adj: -16
	I1219 03:54:51.967680   56230 kubeadm.go:602] duration metric: took 9.210474475s to restartPrimaryControlPlane
	I1219 03:54:51.967689   56230 kubeadm.go:403] duration metric: took 9.255411647s to StartCluster
	I1219 03:54:51.967705   56230 settings.go:142] acquiring lock: {Name:mkb131693a40b9aa50c302192272518c3561c861 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:54:51.967787   56230 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-5010/kubeconfig
	I1219 03:54:51.970216   56230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5010/kubeconfig: {Name:mk464ac013e036429e360ed78fc04687ed75f83a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:54:51.970558   56230 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.50.68 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1219 03:54:51.970693   56230 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1219 03:54:51.970789   56230 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-168174"
	I1219 03:54:51.970812   56230 config.go:182] Loaded profile config "default-k8s-diff-port-168174": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:54:51.970826   56230 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-168174"
	I1219 03:54:51.970825   56230 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-168174"
	I1219 03:54:51.970846   56230 addons.go:70] Setting metrics-server=true in profile "default-k8s-diff-port-168174"
	I1219 03:54:51.970858   56230 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-168174"
	I1219 03:54:51.970858   56230 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-168174"
	I1219 03:54:51.970884   56230 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-168174"
	W1219 03:54:51.970893   56230 addons.go:248] addon dashboard should already be in state true
	I1219 03:54:51.970919   56230 host.go:66] Checking if "default-k8s-diff-port-168174" exists ...
	W1219 03:54:51.970836   56230 addons.go:248] addon storage-provisioner should already be in state true
	I1219 03:54:51.970978   56230 host.go:66] Checking if "default-k8s-diff-port-168174" exists ...
	I1219 03:54:51.970861   56230 addons.go:239] Setting addon metrics-server=true in "default-k8s-diff-port-168174"
	W1219 03:54:51.971035   56230 addons.go:248] addon metrics-server should already be in state true
	I1219 03:54:51.971057   56230 host.go:66] Checking if "default-k8s-diff-port-168174" exists ...
	I1219 03:54:51.971960   56230 out.go:179] * Verifying Kubernetes components...
	I1219 03:54:51.973008   56230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:54:51.974650   56230 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:54:51.974726   56230 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
	I1219 03:54:51.974952   56230 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1219 03:54:51.975006   56230 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 03:54:48.502712   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:49.001711   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:49.501592   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:50.001601   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:50.501313   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:51.002296   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:51.502360   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:52.002651   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:52.503108   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:53.002165   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:51.975433   56230 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-168174"
	W1219 03:54:51.975454   56230 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:54:51.975493   56230 host.go:66] Checking if "default-k8s-diff-port-168174" exists ...
	I1219 03:54:51.975992   56230 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1219 03:54:51.976010   56230 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1219 03:54:51.976037   56230 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:54:51.976049   56230 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:54:51.978029   56230 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:54:51.978047   56230 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:54:51.979030   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:51.979580   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:51.979617   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:51.979992   56230 sshutil.go:53] new ssh client: &{IP:192.168.50.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/default-k8s-diff-port-168174/id_rsa Username:docker}
	I1219 03:54:51.980624   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:51.980627   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:51.981054   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:51.981088   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:51.981091   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:51.981123   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:51.981299   56230 sshutil.go:53] new ssh client: &{IP:192.168.50.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/default-k8s-diff-port-168174/id_rsa Username:docker}
	I1219 03:54:51.981430   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:51.981442   56230 sshutil.go:53] new ssh client: &{IP:192.168.50.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/default-k8s-diff-port-168174/id_rsa Username:docker}
	I1219 03:54:51.981908   56230 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:97:a2", ip: ""} in network mk-default-k8s-diff-port-168174: {Iface:virbr2 ExpiryTime:2025-12-19 04:54:31 +0000 UTC Type:0 Mac:52:54:00:d9:97:a2 Iaid: IPaddr:192.168.50.68 Prefix:24 Hostname:default-k8s-diff-port-168174 Clientid:01:52:54:00:d9:97:a2}
	I1219 03:54:51.981931   56230 main.go:144] libmachine: domain default-k8s-diff-port-168174 has defined IP address 192.168.50.68 and MAC address 52:54:00:d9:97:a2 in network mk-default-k8s-diff-port-168174
	I1219 03:54:51.982118   56230 sshutil.go:53] new ssh client: &{IP:192.168.50.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/default-k8s-diff-port-168174/id_rsa Username:docker}
	I1219 03:54:52.329267   56230 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:54:52.362110   56230 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-168174" to be "Ready" ...
	I1219 03:54:52.365712   56230 node_ready.go:49] node "default-k8s-diff-port-168174" is "Ready"
	I1219 03:54:52.365740   56230 node_ready.go:38] duration metric: took 3.595186ms for node "default-k8s-diff-port-168174" to be "Ready" ...
	I1219 03:54:52.365758   56230 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:54:52.365821   56230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:54:52.390728   56230 api_server.go:72] duration metric: took 420.108978ms to wait for apiserver process to appear ...
	I1219 03:54:52.390759   56230 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:54:52.390781   56230 api_server.go:253] Checking apiserver healthz at https://192.168.50.68:8444/healthz ...
	I1219 03:54:52.397481   56230 api_server.go:279] https://192.168.50.68:8444/healthz returned 200:
	ok
	I1219 03:54:52.398595   56230 api_server.go:141] control plane version: v1.34.3
	I1219 03:54:52.398619   56230 api_server.go:131] duration metric: took 7.851716ms to wait for apiserver health ...
	I1219 03:54:52.398634   56230 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:54:52.403556   56230 system_pods.go:59] 8 kube-system pods found
	I1219 03:54:52.403621   56230 system_pods.go:61] "coredns-66bc5c9577-dnfcc" [b8ee66a7-b129-4499-aad8-a988ecea241c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:54:52.403638   56230 system_pods.go:61] "etcd-default-k8s-diff-port-168174" [af7e1768-95b9-431e-877a-63138a91dcdc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:54:52.403653   56230 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-168174" [a59f79b0-b99b-4092-8b4a-773ff0a04569] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:54:52.403664   56230 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-168174" [405651c9-cdc7-414f-a512-f11b7b580eb8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:54:52.403676   56230 system_pods.go:61] "kube-proxy-zs4wg" [2212782c-32ab-4355-8dda-9117953b0223] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:54:52.403690   56230 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-168174" [a524e706-e546-4aa4-848c-f52eb802ffb0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:54:52.403705   56230 system_pods.go:61] "metrics-server-746fcd58dc-xjkbx" [c6e2f2b2-7b94-4ff2-85ba-e79d72b30655] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:54:52.403714   56230 system_pods.go:61] "storage-provisioner" [ec1d1888-a950-48d5-9b73-440e7556818b] Running
	I1219 03:54:52.403725   56230 system_pods.go:74] duration metric: took 5.080532ms to wait for pod list to return data ...
	I1219 03:54:52.403737   56230 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:54:52.406964   56230 default_sa.go:45] found service account: "default"
	I1219 03:54:52.406989   56230 default_sa.go:55] duration metric: took 3.241415ms for default service account to be created ...
	I1219 03:54:52.406999   56230 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:54:52.412763   56230 system_pods.go:86] 8 kube-system pods found
	I1219 03:54:52.412787   56230 system_pods.go:89] "coredns-66bc5c9577-dnfcc" [b8ee66a7-b129-4499-aad8-a988ecea241c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:54:52.412797   56230 system_pods.go:89] "etcd-default-k8s-diff-port-168174" [af7e1768-95b9-431e-877a-63138a91dcdc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:54:52.412804   56230 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-168174" [a59f79b0-b99b-4092-8b4a-773ff0a04569] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:54:52.412810   56230 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-168174" [405651c9-cdc7-414f-a512-f11b7b580eb8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:54:52.412816   56230 system_pods.go:89] "kube-proxy-zs4wg" [2212782c-32ab-4355-8dda-9117953b0223] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:54:52.412821   56230 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-168174" [a524e706-e546-4aa4-848c-f52eb802ffb0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:54:52.412826   56230 system_pods.go:89] "metrics-server-746fcd58dc-xjkbx" [c6e2f2b2-7b94-4ff2-85ba-e79d72b30655] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:54:52.412830   56230 system_pods.go:89] "storage-provisioner" [ec1d1888-a950-48d5-9b73-440e7556818b] Running
	I1219 03:54:52.412837   56230 system_pods.go:126] duration metric: took 5.832618ms to wait for k8s-apps to be running ...
	I1219 03:54:52.412847   56230 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:54:52.412890   56230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:54:52.437131   56230 system_svc.go:56] duration metric: took 24.267658ms WaitForService to wait for kubelet
	I1219 03:54:52.437166   56230 kubeadm.go:587] duration metric: took 466.551246ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:54:52.437188   56230 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:54:52.440753   56230 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1219 03:54:52.440776   56230 node_conditions.go:123] node cpu capacity is 2
	I1219 03:54:52.440789   56230 node_conditions.go:105] duration metric: took 3.595658ms to run NodePressure ...
	I1219 03:54:52.440804   56230 start.go:242] waiting for startup goroutines ...
	I1219 03:54:52.571235   56230 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:54:52.579720   56230 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:54:52.588696   56230 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	I1219 03:54:52.607999   56230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:54:52.623079   56230 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1219 03:54:52.623103   56230 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1219 03:54:52.632201   56230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:54:52.689775   56230 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1219 03:54:52.689802   56230 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1219 03:54:52.755241   56230 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:54:52.755280   56230 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1219 03:54:52.860818   56230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
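The metrics-server manifests are first copied under /etc/kubernetes/addons and then applied in a single kubectl invocation, as the Run line above shows. A hypothetical local sketch of that apply step follows; the real code drives the same command through minikube's ssh_runner inside the guest, and the paths and kubectl version here are taken verbatim from the log:

package main

import (
	"fmt"
	"os/exec"
)

// Illustrative sketch (not minikube's code): apply the addon manifests with
// the node's bundled kubectl, using the same KUBECONFIG and file paths that
// appear in the log above.
func main() {
	args := []string{
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.34.3/kubectl", "apply",
		"-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	out, err := exec.Command("sudo", args...).CombinedOutput()
	fmt.Println(string(out))
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}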
	I1219 03:54:51.531836   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:52.032945   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:52.532771   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:53.031681   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:53.532510   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:54.032369   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:54.532915   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:55.031905   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:55.531152   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:56.032011   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:53.502165   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:54.002813   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:54.501582   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:55.002986   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:55.501711   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:56.000984   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:56.502399   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:57.002200   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:57.502369   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:58.002000   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:54.655285   56230 ssh_runner.go:235] Completed: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh": (2.066552827s)
	I1219 03:54:54.655390   56230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	I1219 03:54:54.655405   56230 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.047371795s)
	I1219 03:54:54.655528   56230 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.023298979s)
	I1219 03:54:54.655657   56230 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.794802456s)
	I1219 03:54:54.655684   56230 addons.go:500] Verifying addon metrics-server=true in "default-k8s-diff-port-168174"
	I1219 03:54:57.969258   56230 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (3.313828747s)
	I1219 03:54:57.969346   56230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:54:58.498709   56230 addons.go:500] Verifying addon dashboard=true in "default-k8s-diff-port-168174"
	I1219 03:54:58.501734   56230 out.go:179] * Verifying dashboard addon...
	I1219 03:54:58.504348   56230 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:54:58.510036   56230 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:54:58.510056   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:59.010436   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
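The repeated kapi.go:96 lines are a poll on the label selector app.kubernetes.io/name=kubernetes-dashboard-web in the kubernetes-dashboard namespace until a matching pod reports Running. An illustrative client-go loop doing the equivalent is sketched below; it is not minikube's kapi helper, and the kubeconfig path is an assumption taken from the log above:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForDashboardPod polls for a Running pod matching the same label
// selector the log is waiting on.
func waitForDashboardPod(client kubernetes.Interface, timeout time.Duration) error {
	selector := "app.kubernetes.io/name=kubernetes-dashboard-web"
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("no Running pod for %q after %s", selector, timeout)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22230-5010/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := waitForDashboardPod(client, 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}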
	I1219 03:54:56.532022   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:57.031585   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:57.531985   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:58.032925   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:58.533378   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:59.032504   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:59.530653   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:00.031045   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:00.531549   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:01.030879   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:58.502926   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:59.001807   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:59.501672   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:00.001239   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:00.501991   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:01.001622   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:01.501692   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:02.002517   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:02.502331   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:03.001757   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:54:59.508121   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:00.008244   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:00.507884   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:01.008223   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:01.507861   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:02.012677   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:02.507898   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:03.008121   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:03.508367   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:04.008842   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:01.531235   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:02.031845   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:02.531542   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:03.030822   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:03.532087   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:04.032140   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:04.532095   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:05.032183   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:05.532546   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:06.031699   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:03.501262   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:04.001782   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:04.501640   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:05.002705   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:05.501849   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:06.001647   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:06.502225   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:07.002170   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:07.502397   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:08.003244   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:04.508346   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:05.007493   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:05.507987   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:06.007825   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:06.507635   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:07.008062   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:07.507047   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:08.008442   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:08.510089   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:09.008180   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:06.536198   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:07.032221   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:07.532227   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:08.032198   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:08.531813   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:09.031889   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:09.531666   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:10.031122   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:10.532149   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:11.031983   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:08.502642   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:09.001743   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:09.502066   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:10.002044   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:10.502017   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:11.002386   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:11.502467   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:12.002107   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:12.502677   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:13.002161   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:09.507112   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:10.008461   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:10.508312   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:11.008611   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:11.508384   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:12.008280   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:12.508541   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:13.008623   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:13.508431   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:14.009349   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:11.532619   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:12.031875   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:12.532589   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:55:13.031244   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	... (process 55595 repeated the same message at ~0.5s intervals through 03:56:29.032225)
	I1219 03:55:13.502018   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	... (process 55957 repeated the same message at ~0.5s intervals through 03:56:28.001261)
	I1219 03:55:14.508124   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	... (process 56230 repeated the same message at ~0.5s intervals through 03:56:29.008403)
	I1219 03:56:29.532187   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:30.032017   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:30.530954   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:31.031969   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:28.502513   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:29.001736   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:29.502118   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:30.001728   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:30.502479   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:31.002783   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:31.502414   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:32.002781   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:32.501809   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:33.002598   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:29.507936   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:30.007414   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:30.508924   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:31.007756   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:31.509607   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:32.008188   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:32.508901   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:33.009164   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:33.507936   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:34.007349   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:31.532294   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:32.033050   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:32.532115   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:33.031971   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:33.531279   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:34.032256   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:34.531863   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:35.031763   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:35.531164   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:36.031290   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:33.502730   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:34.001984   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:34.502287   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:35.002224   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:35.502985   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:36.000948   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:36.501630   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:37.001169   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:37.502075   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:38.002834   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:34.508225   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:35.007739   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:35.508108   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:36.008481   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:36.508746   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:37.008298   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:37.507944   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:38.008428   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:38.507905   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:39.007728   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:36.531448   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:37.032595   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:37.532096   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:38.031394   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:38.532851   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:39.032534   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:39.532843   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:40.031994   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:40.533667   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:41.033061   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:38.501275   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:39.003274   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:39.502492   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:40.002263   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:40.501814   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:41.002188   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:41.502456   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:42.002449   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:42.503413   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:43.002514   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:39.508385   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:40.008219   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:40.509237   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:41.007998   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:41.507734   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:42.008610   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:42.509142   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:43.008330   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:43.507609   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:44.009119   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:41.531626   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:42.032337   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:42.532298   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:43.032378   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:43.531679   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:44.032529   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:44.532155   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:45.031828   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:45.531299   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:46.031239   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:43.502830   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:44.001989   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:44.502331   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:45.002798   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:45.502197   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:46.001852   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:46.501952   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:47.001753   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:47.501421   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:48.002328   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:44.508315   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:45.008862   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:45.508067   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:46.008030   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:46.507755   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:47.008786   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:47.507672   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:48.009365   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:48.509016   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:49.007277   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:46.531667   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:47.031610   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:47.532096   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:48.032319   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:48.532500   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:49.031773   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:49.531561   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:50.032598   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:50.531974   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:51.031362   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:48.502152   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:49.001130   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:49.502479   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:50.002251   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:50.501762   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:51.000846   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:51.502253   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:52.002765   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:52.502160   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:53.001409   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:49.508190   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:50.008459   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:50.508829   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:51.007664   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:51.509469   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:52.009747   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:52.509579   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:53.009682   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:53.508738   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:54.008970   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:51.532197   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:52.031685   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:52.532322   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:53.031885   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:53.531778   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:54.031643   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:54.531467   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:55.031815   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:55.531155   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:56.031720   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:53.503475   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:54.001639   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:54.501436   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:55.002712   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:55.501897   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:56.001181   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:56.501530   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:57.000985   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:57.501730   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:58.001514   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:54.508263   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:55.007505   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:55.508726   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:56.008230   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:56.508664   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:57.008997   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:57.507428   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:58.008379   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:58.508549   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:59.007995   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:56.531536   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:57.032617   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:57.535990   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:58.031685   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:58.533156   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:59.031587   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:59.532830   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:00.031367   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:00.532930   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:01.031943   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:58.502386   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:59.002215   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:59.503037   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:00.001428   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:00.502319   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:01.002311   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:01.502140   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:02.002283   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:02.502150   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:03.002240   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:56:59.507946   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:00.008416   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:00.508631   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:01.008561   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:01.508912   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:02.008658   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:02.509386   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:03.008665   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:03.509011   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:04.008072   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:01.533032   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:02.032143   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:02.531588   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:03.032371   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:03.533496   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:04.033023   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:04.531133   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:05.032394   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:05.532243   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:06.031898   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:03.502405   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:04.001877   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:04.505174   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:05.002029   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:05.502125   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:06.001824   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:06.501660   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:07.002257   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:07.502497   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:08.002911   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:04.509042   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:05.008740   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:05.507989   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:06.007873   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:06.508285   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:07.007091   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:07.508238   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:08.008238   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:08.508597   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:09.009516   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:06.531381   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:07.032162   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:07.531649   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:08.032718   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:08.532156   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:09.033496   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:09.533930   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:10.032368   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:10.532625   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:11.032661   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:08.501952   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:09.001604   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:09.501905   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:10.002311   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:10.501777   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:11.001546   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:11.502154   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:12.002455   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:12.503055   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:13.001472   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:09.508050   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:10.008080   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:10.507970   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:11.007844   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:11.508056   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:12.007765   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:12.508456   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:13.007981   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:13.508855   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:14.008604   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:11.532081   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:12.032702   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:12.531078   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:13.031663   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:13.531993   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:14.033077   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:14.531457   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:15.032927   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:15.531699   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:16.031008   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:13.502839   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:14.001682   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:14.501484   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:15.003428   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:15.502649   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:16.002047   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:16.501936   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:17.001951   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:17.502955   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:18.002709   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:14.509628   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:15.008629   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:15.509037   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:16.008098   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:16.508408   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:17.009392   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:17.507832   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:18.008540   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:18.509468   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:19.008988   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:16.532091   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:17.032487   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:17.532767   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:18.032348   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:18.533265   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:19.032832   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:19.533225   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:20.032480   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:20.531859   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:21.031535   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:18.502389   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:19.002611   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:19.501620   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:20.002058   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:20.502778   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:21.002073   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:21.501287   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:22.001492   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:22.503034   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:23.002058   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:19.507218   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:20.008007   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:20.507903   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:21.008002   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:21.508538   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:22.009106   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:22.509031   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:23.008019   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:23.508250   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:24.009604   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:21.532463   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:22.032668   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:22.531757   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:23.031273   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:23.533278   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:24.032950   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:24.531375   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:25.032433   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:25.532764   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:26.031941   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:23.501829   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:24.001397   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:24.502802   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:25.001851   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:25.503206   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:26.001481   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:26.502653   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:27.002180   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:27.501887   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:28.001927   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:24.509024   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:25.007589   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:25.509073   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:26.008555   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:26.508449   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:27.008256   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:27.508501   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:28.009916   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:28.508490   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:29.008336   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:26.531904   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:27.031168   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:27.532025   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:28.032276   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:28.531968   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:29.032764   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:29.531973   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:30.031624   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:30.532201   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:31.032129   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:28.502278   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:29.001507   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:29.501338   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:30.002753   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:30.501620   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:31.001545   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:31.502545   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:32.001650   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:32.501704   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:33.001060   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:29.508006   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:30.007837   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:30.509358   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:31.008238   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:31.508132   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:32.007983   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:32.508981   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:33.007803   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:33.507769   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:34.009970   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:31.532685   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:32.033023   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:32.531348   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:33.031614   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:33.533370   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:34.032795   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:34.531237   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:35.032033   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:35.532778   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:36.031294   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:33.502337   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:34.002204   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:34.501845   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:35.002344   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:35.503195   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:36.002894   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:36.501979   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:37.002008   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:37.501981   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:38.001740   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:34.507806   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:35.009357   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:35.508695   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:36.008959   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:36.509725   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:37.008245   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:37.507606   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:38.008218   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:38.507870   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:39.007087   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:36.532257   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:37.032024   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:37.532220   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:38.031647   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:38.532123   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:39.032889   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:39.532444   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:40.032621   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:40.532943   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:41.031712   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:38.501355   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:39.002083   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:39.501469   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:40.002554   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:40.501408   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:41.002216   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:41.502543   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:42.001754   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:42.501454   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:43.002870   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:39.507033   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:40.007862   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:40.509097   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:41.008460   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:41.509108   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:42.007794   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:42.508514   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:43.009784   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:43.508154   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:44.008565   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:41.531552   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:42.032724   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:42.531968   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:43.031728   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:43.531786   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:44.032471   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:44.531802   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:45.032162   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:45.532320   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:46.031297   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:43.503203   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:44.002682   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:44.502066   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:45.001775   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:45.501403   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:46.002298   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:46.502073   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:47.001483   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:47.501639   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:48.002266   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:44.508296   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:45.008881   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:45.508078   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:46.007871   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:46.508564   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:47.008609   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:47.507625   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:48.008815   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:48.507996   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:49.009033   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:46.531916   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:47.032003   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:47.535669   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:48.032260   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:48.533368   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:49.032732   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:49.532734   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:50.031076   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:50.531706   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:51.031411   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:48.502350   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:49.002202   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:49.502113   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:50.002611   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:50.501323   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:51.002251   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:51.501726   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:52.003470   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:52.502490   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:53.002224   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:49.507379   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:50.007665   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:50.508534   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:51.009007   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:51.509344   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:52.007746   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:52.508532   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:53.009346   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:53.507367   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:54.009828   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:51.532471   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:52.032182   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:52.531696   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:53.031891   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:53.531523   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:54.032527   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:54.531544   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:55.033055   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:55.532251   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:56.032012   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:53.501701   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:54.001815   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:54.501403   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:55.001721   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:55.502408   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:56.006350   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:56.502718   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:57.000975   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:57.502050   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:58.001993   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:54.507665   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:55.010022   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:55.507891   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:56.017962   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:56.509387   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:57.009499   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:57.508592   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:58.007712   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:58.509159   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:59.008024   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:56.532417   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:57.032702   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:57.531960   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:58.032030   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:58.532438   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:59.032562   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:59.532541   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:00.031906   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:00.533707   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:01.031481   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:58.501333   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:59.002706   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:59.501390   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:00.003083   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:00.501477   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:01.003243   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:01.502051   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:02.002119   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:02.502250   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:03.001892   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:57:59.508467   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:00.007934   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:00.508461   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:01.009263   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:01.508676   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:02.007597   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:02.508263   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:03.008661   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:03.508545   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:04.008653   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:01.533009   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:02.032493   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:02.532027   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:03.032116   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:03.531261   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:04.034181   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:04.531702   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:05.032409   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:05.533808   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:06.031246   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:03.501444   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:04.002084   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:04.501717   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:05.002397   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:05.502329   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:06.002161   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:06.501724   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:07.001096   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:07.501676   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:08.001373   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:04.508793   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:05.009558   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:05.508307   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:06.008745   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:06.508478   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:07.009365   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:07.508285   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:08.008394   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:08.507659   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:09.008883   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:06.531671   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:07.032663   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:07.532654   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:08.032443   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:08.531860   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:09.031786   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:09.532418   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:10.032031   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:10.531026   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:11.031184   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:08.502311   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:09.001761   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:09.501921   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:10.001779   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:10.502884   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:11.000815   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:11.502204   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:12.002552   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:12.502487   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:13.002005   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:09.509248   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:10.008315   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:10.507712   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:11.009764   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:11.509368   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:12.007428   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:12.508548   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:13.008954   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:13.508930   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:14.008936   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:11.532311   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:12.032156   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:12.531768   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:13.031259   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:13.532112   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:14.032440   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:14.533083   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:15.031470   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:15.533077   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:16.031626   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:13.503116   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:14.002138   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:14.502257   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:15.002721   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:15.501511   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:16.002183   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:16.502306   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:17.002714   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:17.501224   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:18.003247   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:14.508715   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:15.008752   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:15.509114   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:16.007677   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:16.508804   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:17.009618   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:17.508120   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:18.007885   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:18.507480   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:19.008978   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:16.532146   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:17.031615   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:17.532552   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:18.031381   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:18.532386   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:19.032461   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:19.533200   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:20.032375   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:20.531718   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:21.030828   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:18.502028   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:19.001762   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:19.501418   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:20.002914   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:20.501869   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:21.001896   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:21.501339   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:22.002565   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:22.502667   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:23.001134   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:19.507828   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:20.008203   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:20.508364   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:21.008929   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:21.508117   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:22.007662   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:22.507899   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:23.008710   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:23.507212   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:24.008474   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:21.532845   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:22.032290   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:22.532646   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:23.031957   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:23.531378   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:24.032264   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:24.531928   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:25.031473   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:25.532386   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:26.032382   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:23.502231   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:24.002752   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:24.500970   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:25.000924   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:25.501030   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:26.002189   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:26.502781   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:27.002623   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:27.501117   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:28.001792   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:24.508109   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:25.008892   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:25.508228   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:26.007643   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:26.508278   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:27.009399   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:27.508216   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:28.008474   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:28.507952   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:29.008596   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:26.532465   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:27.032800   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:27.531643   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:28.031616   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:28.533745   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:29.031460   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:29.532616   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:30.032471   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:30.532228   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:31.031437   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:28.501355   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:29.001764   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:29.501298   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:30.003052   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:30.502950   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:31.001770   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:31.501738   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:32.003204   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:32.503749   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:33.000964   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:29.508615   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:30.009187   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:30.507594   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:31.009258   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:31.508166   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:32.008876   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:32.508828   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:33.009323   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:33.507635   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:34.008857   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:31.532499   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:32.033303   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:32.532140   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:33.031451   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:33.532012   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:34.031739   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:34.531969   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:33.501466   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:58:34.507536   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	[... the same kapi.go:96 "waiting for pod" message repeats roughly every 500ms for each of the three processes (55595, 55957, 56230) from 03:58:33 through 03:59:50; the kubernetes-dashboard-web pod remains Pending throughout ...]
	I1219 03:59:51.031169   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:48.502523   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:49.001089   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:49.502166   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:50.002297   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:50.501900   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:51.002177   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:51.501990   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:52.001761   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:52.503411   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:53.001888   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:49.508661   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:50.008645   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:50.509700   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:51.008238   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:51.509485   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:52.008711   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:52.508528   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:53.009157   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:53.508329   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:54.008371   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:51.532470   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:52.033506   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:52.532332   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:53.032618   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:53.532408   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:54.032700   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:54.532680   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:55.030763   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:55.531486   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:56.032694   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:53.501870   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:54.001255   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:54.502146   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:55.001892   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:55.502373   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:56.001923   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:56.502476   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:57.001982   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:57.502446   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:58.003222   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:54.508346   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:55.008513   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:55.509470   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:56.009002   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:56.508067   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:57.007514   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:57.508798   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:58.008828   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:58.508496   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:59.008238   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:56.531146   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:57.031591   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:57.532375   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:58.033082   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:58.531649   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:59.031902   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:59.532588   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:00.032880   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:00.532136   55595 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:01.028606   55595 kapi.go:81] temporary error: getting Pods with label selector "app.kubernetes.io/name=kubernetes-dashboard-web" : [client rate limiter Wait returned an error: context deadline exceeded]
	I1219 04:00:01.028642   55595 kapi.go:107] duration metric: took 6m0.000598506s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	W1219 04:00:01.028754   55595 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [waiting for app.kubernetes.io/name=kubernetes-dashboard-web pods: context deadline exceeded]
	I1219 04:00:01.030295   55595 out.go:179] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I1219 04:00:01.031288   55595 addons.go:546] duration metric: took 6m6.695311639s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I1219 04:00:01.031318   55595 start.go:247] waiting for cluster config update ...
	I1219 04:00:01.031329   55595 start.go:256] writing updated cluster config ...
	I1219 04:00:01.031596   55595 ssh_runner.go:195] Run: rm -f paused
	I1219 04:00:01.039401   55595 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 04:00:01.043907   55595 pod_ready.go:83] waiting for pod "coredns-7d764666f9-s7729" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:01.050711   55595 pod_ready.go:94] pod "coredns-7d764666f9-s7729" is "Ready"
	I1219 04:00:01.050733   55595 pod_ready.go:86] duration metric: took 6.803187ms for pod "coredns-7d764666f9-s7729" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:01.053765   55595 pod_ready.go:83] waiting for pod "etcd-no-preload-298059" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:01.058213   55595 pod_ready.go:94] pod "etcd-no-preload-298059" is "Ready"
	I1219 04:00:01.058234   55595 pod_ready.go:86] duration metric: took 4.447718ms for pod "etcd-no-preload-298059" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:01.060300   55595 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-298059" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:01.065142   55595 pod_ready.go:94] pod "kube-apiserver-no-preload-298059" is "Ready"
	I1219 04:00:01.065166   55595 pod_ready.go:86] duration metric: took 4.840116ms for pod "kube-apiserver-no-preload-298059" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:01.067284   55595 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-298059" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:01.445171   55595 pod_ready.go:94] pod "kube-controller-manager-no-preload-298059" is "Ready"
	I1219 04:00:01.445200   55595 pod_ready.go:86] duration metric: took 377.900542ms for pod "kube-controller-manager-no-preload-298059" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:01.645417   55595 pod_ready.go:83] waiting for pod "kube-proxy-mdfxl" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:02.044330   55595 pod_ready.go:94] pod "kube-proxy-mdfxl" is "Ready"
	I1219 04:00:02.044377   55595 pod_ready.go:86] duration metric: took 398.907218ms for pod "kube-proxy-mdfxl" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:02.245766   55595 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-298059" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:02.645879   55595 pod_ready.go:94] pod "kube-scheduler-no-preload-298059" is "Ready"
	I1219 04:00:02.645937   55595 pod_ready.go:86] duration metric: took 400.143888ms for pod "kube-scheduler-no-preload-298059" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:02.645954   55595 pod_ready.go:40] duration metric: took 1.606522986s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 04:00:02.697158   55595 start.go:625] kubectl: 1.35.0, cluster: 1.35.0-rc.1 (minor skew: 0)
	I1219 04:00:02.698980   55595 out.go:179] * Done! kubectl is now configured to use "no-preload-298059" cluster and "default" namespace by default
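
The repeated kapi.go:96 lines in this log come from a fixed-interval poll: minikube lists pods matching the addon's label selector roughly every 500ms until a 6-minute context deadline expires, at which point the List call itself fails with "context deadline exceeded" (the kapi.go:81 "temporary error" line). A minimal client-go sketch of that pattern, not minikube's actual code; the kubeconfig path is a placeholder and the selector is the one seen in the log:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: kubeconfig path is a placeholder, not the harness's real path.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Overall budget mirrors the "took 6m0.000598506s to wait" line above.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	selector := "app.kubernetes.io/name=kubernetes-dashboard-web"
	for {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx,
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			// Once the deadline passes, List fails with "context deadline exceeded",
			// which is what the kapi.go:81 "temporary error" line reports.
			fmt.Println("gave up waiting:", err)
			return
		}
		if len(pods.Items) > 0 && pods.Items[0].Status.Phase == corev1.PodRunning {
			fmt.Println("pod is running")
			return
		}
		fmt.Println("waiting for pod, current state: Pending")
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence of the timestamps above
	}
}
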
	I1219 03:59:58.502543   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:59.001139   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:59.501649   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:00.001415   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:00.502374   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:01.002272   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:01.502072   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:02.002694   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:02.501377   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:03.002499   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:59:59.508999   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:00.009465   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:00.508462   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:01.008805   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:01.509068   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:02.007682   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:02.508807   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:03.009533   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:03.509171   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:04.008344   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:03.501482   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:04.002080   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:04.502514   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:05.002257   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:05.502741   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:06.001565   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:06.502968   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:07.002364   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:07.502630   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:08.001239   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:04.508168   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:05.007952   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:05.508714   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:06.008805   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:06.508239   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:07.009278   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:07.509811   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:08.008945   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:08.513267   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:09.008127   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:08.502641   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:09.002630   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:09.501272   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:10.001592   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:10.502177   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:11.002030   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:11.501972   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:12.001917   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:12.502061   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:13.001824   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:09.508106   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:10.007937   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:10.507970   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:11.008418   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:11.508614   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:12.007994   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:12.508452   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:13.008632   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:13.510343   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:14.008029   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:13.501559   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:14.000819   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:14.501592   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:15.002062   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:15.501962   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:16.001720   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:16.502083   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:17.002024   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:17.501681   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:18.001502   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:14.507866   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:15.009254   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:15.508704   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:16.008650   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:16.508846   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:17.010798   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:17.507933   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:18.009073   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:18.508337   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:19.008331   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:18.502462   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:19.003975   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:19.501373   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:20.002075   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:20.502437   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:21.001953   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:21.501417   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:22.003083   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:22.501515   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:23.001553   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:19.509712   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:20.008196   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:20.507361   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:21.008284   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:21.508302   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:22.007728   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:22.509259   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:23.008711   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:23.509664   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:24.008507   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:23.502079   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:24.001986   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:24.501922   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:25.001179   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:25.502972   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:26.002165   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:26.502809   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:27.001369   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:27.502083   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:28.002507   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:24.508264   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:25.008006   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:25.509488   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:26.008519   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:26.508978   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:27.008309   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:27.508775   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:28.009625   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:28.508731   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:29.009043   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:28.502787   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:29.001831   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:29.502430   55957 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:29.998860   55957 kapi.go:81] temporary error: getting Pods with label selector "app.kubernetes.io/name=kubernetes-dashboard-web" : [client rate limiter Wait returned an error: context deadline exceeded]
	I1219 04:00:29.998886   55957 kapi.go:107] duration metric: took 6m0.000824832s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	W1219 04:00:29.998960   55957 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [waiting for app.kubernetes.io/name=kubernetes-dashboard-web pods: context deadline exceeded]
	I1219 04:00:30.000498   55957 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1219 04:00:30.001513   55957 addons.go:546] duration metric: took 6m7.141140342s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1219 04:00:30.001540   55957 start.go:247] waiting for cluster config update ...
	I1219 04:00:30.001550   55957 start.go:256] writing updated cluster config ...
	I1219 04:00:30.001800   55957 ssh_runner.go:195] Run: rm -f paused
	I1219 04:00:30.010656   55957 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 04:00:30.015390   55957 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9ptrv" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:30.020029   55957 pod_ready.go:94] pod "coredns-66bc5c9577-9ptrv" is "Ready"
	I1219 04:00:30.020051   55957 pod_ready.go:86] duration metric: took 4.638733ms for pod "coredns-66bc5c9577-9ptrv" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:30.022246   55957 pod_ready.go:83] waiting for pod "etcd-embed-certs-244717" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:30.026208   55957 pod_ready.go:94] pod "etcd-embed-certs-244717" is "Ready"
	I1219 04:00:30.026224   55957 pod_ready.go:86] duration metric: took 3.954396ms for pod "etcd-embed-certs-244717" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:30.028026   55957 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-244717" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:30.033934   55957 pod_ready.go:94] pod "kube-apiserver-embed-certs-244717" is "Ready"
	I1219 04:00:30.033951   55957 pod_ready.go:86] duration metric: took 5.905842ms for pod "kube-apiserver-embed-certs-244717" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:30.036019   55957 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-244717" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:30.417680   55957 pod_ready.go:94] pod "kube-controller-manager-embed-certs-244717" is "Ready"
	I1219 04:00:30.417709   55957 pod_ready.go:86] duration metric: took 381.673199ms for pod "kube-controller-manager-embed-certs-244717" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:30.616122   55957 pod_ready.go:83] waiting for pod "kube-proxy-p8gvm" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:31.015548   55957 pod_ready.go:94] pod "kube-proxy-p8gvm" is "Ready"
	I1219 04:00:31.015585   55957 pod_ready.go:86] duration metric: took 399.442531ms for pod "kube-proxy-p8gvm" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:31.216107   55957 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-244717" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:31.615784   55957 pod_ready.go:94] pod "kube-scheduler-embed-certs-244717" is "Ready"
	I1219 04:00:31.615816   55957 pod_ready.go:86] duration metric: took 399.682179ms for pod "kube-scheduler-embed-certs-244717" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:31.615832   55957 pod_ready.go:40] duration metric: took 1.605153664s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 04:00:31.662639   55957 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 04:00:31.664208   55957 out.go:179] * Done! kubectl is now configured to use "embed-certs-244717" cluster and "default" namespace by default
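
The pod_ready.go lines above report each control-plane pod as "Ready" once its PodReady condition turns true. A rough sketch of that condition check using client-go types; the helper name is mine, not minikube's:

package readiness

import corev1 "k8s.io/api/core/v1"

// isPodReady reports whether the pod's PodReady condition is True; this is
// the signal the pod_ready.go lines above wait on before logging `pod "..." is "Ready"`.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}
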
	I1219 04:00:29.508455   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:30.007925   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:30.507876   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:31.007766   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:31.509691   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:32.008321   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:32.509128   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:33.008511   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:33.509110   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:34.008834   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:34.508661   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:35.009145   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:35.510268   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:36.007810   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:36.508457   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:37.008511   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:37.508340   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:38.008906   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:38.508226   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:39.007515   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:39.508398   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:40.008048   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:40.507411   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:41.008044   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:41.509491   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:42.008720   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:42.508893   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:43.008890   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:43.507746   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:44.008735   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:44.508515   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:45.008316   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:45.508925   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:46.007410   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:46.507809   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:47.007816   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:47.507934   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:48.008317   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:48.511438   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:49.008355   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:49.508479   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:50.008867   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:50.507492   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:51.008220   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:51.508283   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:52.008800   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:52.508617   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:53.008464   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:53.508878   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:54.008198   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:54.509007   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:55.007919   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:55.507118   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:56.008201   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:56.507989   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:57.007872   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:57.508142   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:58.008008   56230 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:00:58.504601   56230 kapi.go:81] temporary error: getting Pods with label selector "app.kubernetes.io/name=kubernetes-dashboard-web" : [client rate limiter Wait returned an error: context deadline exceeded]
	I1219 04:00:58.504633   56230 kapi.go:107] duration metric: took 6m0.000289249s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	W1219 04:00:58.504722   56230 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [waiting for app.kubernetes.io/name=kubernetes-dashboard-web pods: context deadline exceeded]
	I1219 04:00:58.506261   56230 out.go:179] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I1219 04:00:58.507432   56230 addons.go:546] duration metric: took 6m6.536744168s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I1219 04:00:58.507471   56230 start.go:247] waiting for cluster config update ...
	I1219 04:00:58.507487   56230 start.go:256] writing updated cluster config ...
	I1219 04:00:58.507818   56230 ssh_runner.go:195] Run: rm -f paused
	I1219 04:00:58.516094   56230 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 04:00:58.521203   56230 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dnfcc" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:58.526011   56230 pod_ready.go:94] pod "coredns-66bc5c9577-dnfcc" is "Ready"
	I1219 04:00:58.526035   56230 pod_ready.go:86] duration metric: took 4.809568ms for pod "coredns-66bc5c9577-dnfcc" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:58.528592   56230 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-168174" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:58.534102   56230 pod_ready.go:94] pod "etcd-default-k8s-diff-port-168174" is "Ready"
	I1219 04:00:58.534119   56230 pod_ready.go:86] duration metric: took 5.507213ms for pod "etcd-default-k8s-diff-port-168174" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:58.536078   56230 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-168174" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:58.540931   56230 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-168174" is "Ready"
	I1219 04:00:58.540951   56230 pod_ready.go:86] duration metric: took 4.854792ms for pod "kube-apiserver-default-k8s-diff-port-168174" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:58.542905   56230 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-168174" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:58.920520   56230 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-168174" is "Ready"
	I1219 04:00:58.920546   56230 pod_ready.go:86] duration metric: took 377.623833ms for pod "kube-controller-manager-default-k8s-diff-port-168174" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:59.120738   56230 pod_ready.go:83] waiting for pod "kube-proxy-zs4wg" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:59.520222   56230 pod_ready.go:94] pod "kube-proxy-zs4wg" is "Ready"
	I1219 04:00:59.520254   56230 pod_ready.go:86] duration metric: took 399.487462ms for pod "kube-proxy-zs4wg" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:00:59.721383   56230 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-168174" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:01:00.120982   56230 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-168174" is "Ready"
	I1219 04:01:00.121009   56230 pod_ready.go:86] duration metric: took 399.598924ms for pod "kube-scheduler-default-k8s-diff-port-168174" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 04:01:00.121020   56230 pod_ready.go:40] duration metric: took 1.604899766s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 04:01:00.167943   56230 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 04:01:00.169437   56230 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-168174" cluster and "default" namespace by default
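
For the three profiles above, the kubernetes-dashboard-web pod never left Pending within the 6-minute window. A small client-go sketch of the follow-up check one would run next, printing each matching pod's phase and condition reasons; the kubeconfig path and namespace here are assumptions based on the selector in the log:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: kubeconfig path and namespace are placeholders for the failed profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(context.Background(),
		metav1.ListOptions{LabelSelector: "app.kubernetes.io/name=kubernetes-dashboard-web"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s phase=%s\n", p.Name, p.Status.Phase)
		for _, c := range p.Status.Conditions {
			// Condition reason/message (e.g. a scheduling or image pull problem)
			// usually explains a pod that stays Pending.
			fmt.Printf("  %s=%s reason=%q message=%q\n", c.Type, c.Status, c.Reason, c.Message)
		}
	}
}
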
	
	
	==> CRI-O <==
	Dec 19 04:11:59 old-k8s-version-094166 crio[884]: time="2025-12-19 04:11:59.406597904Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766117519406565146,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:198516,},InodesUsed:&UInt64Value{Value:87,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fc8e5dd9-17ee-4406-bbf1-0a25190fc62a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 04:11:59 old-k8s-version-094166 crio[884]: time="2025-12-19 04:11:59.407575832Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a9da4230-2e48-4cda-a314-0fde0c2a1df4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:11:59 old-k8s-version-094166 crio[884]: time="2025-12-19 04:11:59.407667231Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a9da4230-2e48-4cda-a314-0fde0c2a1df4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:11:59 old-k8s-version-094166 crio[884]: time="2025-12-19 04:11:59.408005165Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4ba53a084d3418609c316a45fc2fb31ecffa561d929d3c31c11897926152f7f7,PodSandboxId:1d41e1adeda344c2b6bf312ecd196cb3f294b1dd0940b74440188e728e4f8236,Metadata:&ContainerMetadata{Name:proxy,Attempt:0,},Image:&ImageSpec{Image:3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480,State:CONTAINER_RUNNING,CreatedAt:1766116455714758363,Labels:map[string]string{io.kubernetes.container.name: proxy,io.kubernetes.pod.name: kubernetes-dashboard-kong-f487b85cd-6h64p,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: ce63b698-0626-4cdb-9035-f9f905d770cf,},Annotations:map[string]string{io.kubernetes.container.hash: f625957b,io.kubernetes.container.ports: [{\"name\":\"proxy-
tls\",\"containerPort\":8443,\"protocol\":\"TCP\"},{\"name\":\"status\",\"containerPort\":8100,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"kong\",\"quit\",\"--wait=15\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29b3a5531151deeda30e8690a92f8af8076432b199545941605702c1654d71ca,PodSandboxId:1d41e1adeda344c2b6bf312ecd196cb3f294b1dd0940b74440188e728e4f8236,Metadata:&ContainerMetadata{Name:clear-stale-pid,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/kong@sha256:4379444ecfd82794b27de38a74ba540e8571683dfdfce74c8ecb4018f308fb29,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480,State:CONTAINER_EXITED,CreatedAt:1766116454700263443,Labels:map[string]string{io.kubernetes.container.name: clear-
stale-pid,io.kubernetes.pod.name: kubernetes-dashboard-kong-f487b85cd-6h64p,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: ce63b698-0626-4cdb-9035-f9f905d770cf,},Annotations:map[string]string{io.kubernetes.container.hash: fc5d9912,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30d52d8be50dac733ff4ec86c05667c686a18577fa43f59488bb796d1a5f17cc,PodSandboxId:d8561e78594775abe17c477f444bda5ccdf4feec623c89b0dd77dd900a2cf47a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766116443517547989,Labels:map[string]string{io.kubernetes.contain
er.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6361f87-8123-46eb-8f28-110eb02b1927,},Annotations:map[string]string{io.kubernetes.container.hash: 158eaf00,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:278a539192a9e7c904fb20462e7177dd70e833a4bcfc50bfda225435b77cdac4,PodSandboxId:ac4f5da2781b4b44297b4d7adecf74ff7c3ad33beaee16395c5e4882e53eed9f,Metadata:&ContainerMetadata{Name:kubernetes-dashboard-api,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard-api@sha256:2bd14c0ffee99d15fb1595644ebd1083ac32c5157c6e6fd8615b0f556a1390c2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0607af4fcd8ae78708c5bd51f34ccf8442967b8e41f56f008cc2884690f2f3b,State:CONTAINER_RUNNING,CreatedAt:1766116442753753567,Labels:ma
p[string]string{io.kubernetes.container.name: kubernetes-dashboard-api,io.kubernetes.pod.name: kubernetes-dashboard-api-56d75ddbb-tppfn,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 185d3f92-b7c0-43d3-a6f5-b9d52d8d41c0,},Annotations:map[string]string{io.kubernetes.container.hash: 547c95c6,io.kubernetes.container.ports: [{\"name\":\"api\",\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c52846cd4715f53aa4a5f474e43006e8c5483d90b1849df8dc0075dff1b6083c,PodSandboxId:88153576fecac10bb0a65a4a0730fde12e8880a90d8a049f71fef56dd129ead9,Metadata:&ContainerMetadata{Name:kubernetes-dashboard-auth,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard-auth@sha256:484c65877a3aa9e7095745576e55310f09f135b2fe5d9694289352fe65986052,Annotations:map[string]string{},UserS
pecifiedImage:,RuntimeHandler:,},ImageRef:dd54374d0ab14ab65dd3d5d975f97d5b4aff7b221db479874a4429225b9b22b1,State:CONTAINER_RUNNING,CreatedAt:1766116439276415330,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard-auth,io.kubernetes.pod.name: kubernetes-dashboard-auth-84ff87fdd5-zd9bz,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 1c55220d-544f-4947-b5c5-afdb85361029,},Annotations:map[string]string{io.kubernetes.container.hash: e1c595d7,io.kubernetes.container.ports: [{\"name\":\"auth\",\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:633226fb3e30d41ed72df6ade31838a480d47853a5dd02fecf2e176fc3b823b6,PodSandboxId:f31a88a220665e883c5e31493836838da73f09f4c18d8866a75898e7c9c7feb4,Metadata:&ContainerMetadata{Name:kubernetes-dashboard-metrics-scrap
er,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard-metrics-scraper@sha256:0cdefa0492f2868b93f43880ffad4bd8ae8105c02dc5661352a0a40723b05dbc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d9cbc9f4053ca11fa6edb814b512ab807a89346934700a3a425fc1e24a3f2167,State:CONTAINER_RUNNING,CreatedAt:1766116436190115082,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard-metrics-scraper,io.kubernetes.pod.name: kubernetes-dashboard-metrics-scraper-6b5c7dc479-5rsmg,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 43deb4b4-c100-4378-917c-1eefb131c216,},Annotations:map[string]string{io.kubernetes.container.hash: 33f81994,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d1bca547a
8cb13056ce6f41cd3d2827332c49678081ed4fdf1e5f07e548e8e9,PodSandboxId:5896d1e86f1716bc991865708d89f4e07225e965a5eae42250327484c19431a8,Metadata:&ContainerMetadata{Name:kubernetes-dashboard-web,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard-web@sha256:5924e55a2eb65774b68e35fd0d0b41a3ef5dc6ef3e02dce6e340cf59b6d67d30,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59f642f485d26d479d2dedc7c6139f5ce41939fa22c1152314fefdf3c463aa06,State:CONTAINER_RUNNING,CreatedAt:1766116432981249997,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard-web,io.kubernetes.pod.name: kubernetes-dashboard-web-858bd7466-c5kzr,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 5ae4b527-2bef-4144-a371-dc2f4116a3c3,},Annotations:map[string]string{io.kubernetes.container.hash: 1b2182b9,io.kubernetes.container.ports: [{\"name\":\"web\",\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:796faeaf498a0cf66778ac0bfdca4997a21c3fa259a33a7e4ed26351885ee0c9,PodSandboxId:7b093825f1fe4dd66c14ac2c10e7605906deab68f3acb0588b21657d6fea391e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1766116423691471037,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e1fc5999-4caf-496d-a302-707570e1f019,},Annotations:map[string]string{io.kubernetes.container.hash: c60f0de3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25458f0b01a86f841f0b80d9682d174573303f6a35d4a678193134bd34277761,PodSandboxId:fb0cbab3c54c59215d3f9b0027ec49fe8b34334a5cff2f1f4397a3758d14a620,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1766116420263737361,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jwzpn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4598eae-ba53-4173-bc4a-535887dc6d10,},Annotations:map[string]string{io.kubernetes.container.hash: fffff994,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"
containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c23bdc763b01672e6cb7e934236bc23e89892665c4518cce1a92bfd3230cd50,PodSandboxId:d8561e78594775abe17c477f444bda5ccdf4feec623c89b0dd77dd900a2cf47a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766116412592374221,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6361f87-812
3-46eb-8f28-110eb02b1927,},Annotations:map[string]string{io.kubernetes.container.hash: 158eaf00,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba408cece6208ddeb9327287a76e8a3d4f03bcc10c4b233b10f9959864c891b5,PodSandboxId:4d2c29ff4a2ed8cef6afbbd6aae68bc1475bd43ce62d8b9ac7d89237ae86f824,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a,State:CONTAINER_RUNNING,CreatedAt:1766116412573827827,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k4c59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d1adab-9314-4395-95f2-1ca383aefee1,},Annota
tions:map[string]string{io.kubernetes.container.hash: a49c72a6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f352655401bed294bcff033ea5d2aa2a0bdf87de5fbb1206e8731b680ce582c,PodSandboxId:2ed304f66f5d1100845fc56cd8d93da79cff42aca9a0c55c230af2194535cb4c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157,State:CONTAINER_RUNNING,CreatedAt:1766116408686391475,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-094166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dfb606b3e2663e111185ab803de8836,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 2df5c40d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf6833537f6aeed3fadb0f5baf15f935306cc5b6f0468e6e56b3922330e158ee,PodSandboxId:022dd0dc606224b9ea84f65a61d92ae969d481f95b322acbe2df9b403d830644,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1766116408697700650,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-094166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31afd2a7541d649fb566c092b7f15410,},Annotations:map[string]string{io.kubernetes.containe
r.hash: 5ddae9ea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d28987fec36e57788da0534ff938f4677376fcea5c129e696240ae6fc0ed015,PodSandboxId:31bb13c0703b5ea7fbb41944a5204406197098847c44617acb41400f918cd15d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95,State:CONTAINER_RUNNING,CreatedAt:1766116408617770483,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-094166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b78f15083684b811afd94e8a9518e12,},Annotations:map[string]string{io.kubernetes.container.hash:
404b5e1f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00fed501023e3502b030b4bb79ef632a05bce26dfa057b799045368d55f79e28,PodSandboxId:3cf899f74093f994cd317a64f629e1e64bf7e1d45d395ad5b5d6acfedd333e46,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62,State:CONTAINER_RUNNING,CreatedAt:1766116408638461352,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-094166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5474cd52daad06c999fd54796fd62f6,},Annotations:map[string]string{io.kubernet
es.container.hash: 78dfeacb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a9da4230-2e48-4cda-a314-0fde0c2a1df4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:11:59 old-k8s-version-094166 crio[884]: time="2025-12-19 04:11:59.443195893Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=28b03778-8a4e-428d-b1e8-5bf692b94aab name=/runtime.v1.RuntimeService/Version
	Dec 19 04:11:59 old-k8s-version-094166 crio[884]: time="2025-12-19 04:11:59.443523681Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=28b03778-8a4e-428d-b1e8-5bf692b94aab name=/runtime.v1.RuntimeService/Version
	Dec 19 04:11:59 old-k8s-version-094166 crio[884]: time="2025-12-19 04:11:59.445277397Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=89e23262-7b82-4d8a-beb8-d26eb252f26d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 04:11:59 old-k8s-version-094166 crio[884]: time="2025-12-19 04:11:59.446104700Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766117519446082899,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:198516,},InodesUsed:&UInt64Value{Value:87,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=89e23262-7b82-4d8a-beb8-d26eb252f26d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 04:11:59 old-k8s-version-094166 crio[884]: time="2025-12-19 04:11:59.446886073Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=751769e5-72a4-477c-99c8-ce8dbb3ba16f name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:11:59 old-k8s-version-094166 crio[884]: time="2025-12-19 04:11:59.446940588Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=751769e5-72a4-477c-99c8-ce8dbb3ba16f name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:11:59 old-k8s-version-094166 crio[884]: time="2025-12-19 04:11:59.447201765Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4ba53a084d3418609c316a45fc2fb31ecffa561d929d3c31c11897926152f7f7,PodSandboxId:1d41e1adeda344c2b6bf312ecd196cb3f294b1dd0940b74440188e728e4f8236,Metadata:&ContainerMetadata{Name:proxy,Attempt:0,},Image:&ImageSpec{Image:3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480,State:CONTAINER_RUNNING,CreatedAt:1766116455714758363,Labels:map[string]string{io.kubernetes.container.name: proxy,io.kubernetes.pod.name: kubernetes-dashboard-kong-f487b85cd-6h64p,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: ce63b698-0626-4cdb-9035-f9f905d770cf,},Annotations:map[string]string{io.kubernetes.container.hash: f625957b,io.kubernetes.container.ports: [{\"name\":\"proxy-
tls\",\"containerPort\":8443,\"protocol\":\"TCP\"},{\"name\":\"status\",\"containerPort\":8100,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"kong\",\"quit\",\"--wait=15\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29b3a5531151deeda30e8690a92f8af8076432b199545941605702c1654d71ca,PodSandboxId:1d41e1adeda344c2b6bf312ecd196cb3f294b1dd0940b74440188e728e4f8236,Metadata:&ContainerMetadata{Name:clear-stale-pid,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/kong@sha256:4379444ecfd82794b27de38a74ba540e8571683dfdfce74c8ecb4018f308fb29,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480,State:CONTAINER_EXITED,CreatedAt:1766116454700263443,Labels:map[string]string{io.kubernetes.container.name: clear-
stale-pid,io.kubernetes.pod.name: kubernetes-dashboard-kong-f487b85cd-6h64p,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: ce63b698-0626-4cdb-9035-f9f905d770cf,},Annotations:map[string]string{io.kubernetes.container.hash: fc5d9912,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30d52d8be50dac733ff4ec86c05667c686a18577fa43f59488bb796d1a5f17cc,PodSandboxId:d8561e78594775abe17c477f444bda5ccdf4feec623c89b0dd77dd900a2cf47a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766116443517547989,Labels:map[string]string{io.kubernetes.contain
er.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6361f87-8123-46eb-8f28-110eb02b1927,},Annotations:map[string]string{io.kubernetes.container.hash: 158eaf00,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:278a539192a9e7c904fb20462e7177dd70e833a4bcfc50bfda225435b77cdac4,PodSandboxId:ac4f5da2781b4b44297b4d7adecf74ff7c3ad33beaee16395c5e4882e53eed9f,Metadata:&ContainerMetadata{Name:kubernetes-dashboard-api,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard-api@sha256:2bd14c0ffee99d15fb1595644ebd1083ac32c5157c6e6fd8615b0f556a1390c2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0607af4fcd8ae78708c5bd51f34ccf8442967b8e41f56f008cc2884690f2f3b,State:CONTAINER_RUNNING,CreatedAt:1766116442753753567,Labels:ma
p[string]string{io.kubernetes.container.name: kubernetes-dashboard-api,io.kubernetes.pod.name: kubernetes-dashboard-api-56d75ddbb-tppfn,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 185d3f92-b7c0-43d3-a6f5-b9d52d8d41c0,},Annotations:map[string]string{io.kubernetes.container.hash: 547c95c6,io.kubernetes.container.ports: [{\"name\":\"api\",\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c52846cd4715f53aa4a5f474e43006e8c5483d90b1849df8dc0075dff1b6083c,PodSandboxId:88153576fecac10bb0a65a4a0730fde12e8880a90d8a049f71fef56dd129ead9,Metadata:&ContainerMetadata{Name:kubernetes-dashboard-auth,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard-auth@sha256:484c65877a3aa9e7095745576e55310f09f135b2fe5d9694289352fe65986052,Annotations:map[string]string{},UserS
pecifiedImage:,RuntimeHandler:,},ImageRef:dd54374d0ab14ab65dd3d5d975f97d5b4aff7b221db479874a4429225b9b22b1,State:CONTAINER_RUNNING,CreatedAt:1766116439276415330,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard-auth,io.kubernetes.pod.name: kubernetes-dashboard-auth-84ff87fdd5-zd9bz,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 1c55220d-544f-4947-b5c5-afdb85361029,},Annotations:map[string]string{io.kubernetes.container.hash: e1c595d7,io.kubernetes.container.ports: [{\"name\":\"auth\",\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:633226fb3e30d41ed72df6ade31838a480d47853a5dd02fecf2e176fc3b823b6,PodSandboxId:f31a88a220665e883c5e31493836838da73f09f4c18d8866a75898e7c9c7feb4,Metadata:&ContainerMetadata{Name:kubernetes-dashboard-metrics-scrap
er,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard-metrics-scraper@sha256:0cdefa0492f2868b93f43880ffad4bd8ae8105c02dc5661352a0a40723b05dbc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d9cbc9f4053ca11fa6edb814b512ab807a89346934700a3a425fc1e24a3f2167,State:CONTAINER_RUNNING,CreatedAt:1766116436190115082,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard-metrics-scraper,io.kubernetes.pod.name: kubernetes-dashboard-metrics-scraper-6b5c7dc479-5rsmg,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 43deb4b4-c100-4378-917c-1eefb131c216,},Annotations:map[string]string{io.kubernetes.container.hash: 33f81994,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d1bca547a
8cb13056ce6f41cd3d2827332c49678081ed4fdf1e5f07e548e8e9,PodSandboxId:5896d1e86f1716bc991865708d89f4e07225e965a5eae42250327484c19431a8,Metadata:&ContainerMetadata{Name:kubernetes-dashboard-web,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard-web@sha256:5924e55a2eb65774b68e35fd0d0b41a3ef5dc6ef3e02dce6e340cf59b6d67d30,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59f642f485d26d479d2dedc7c6139f5ce41939fa22c1152314fefdf3c463aa06,State:CONTAINER_RUNNING,CreatedAt:1766116432981249997,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard-web,io.kubernetes.pod.name: kubernetes-dashboard-web-858bd7466-c5kzr,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 5ae4b527-2bef-4144-a371-dc2f4116a3c3,},Annotations:map[string]string{io.kubernetes.container.hash: 1b2182b9,io.kubernetes.container.ports: [{\"name\":\"web\",\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:796faeaf498a0cf66778ac0bfdca4997a21c3fa259a33a7e4ed26351885ee0c9,PodSandboxId:7b093825f1fe4dd66c14ac2c10e7605906deab68f3acb0588b21657d6fea391e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1766116423691471037,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e1fc5999-4caf-496d-a302-707570e1f019,},Annotations:map[string]string{io.kubernetes.container.hash: c60f0de3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25458f0b01a86f841f0b80d9682d174573303f6a35d4a678193134bd34277761,PodSandboxId:fb0cbab3c54c59215d3f9b0027ec49fe8b34334a5cff2f1f4397a3758d14a620,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1766116420263737361,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jwzpn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4598eae-ba53-4173-bc4a-535887dc6d10,},Annotations:map[string]string{io.kubernetes.container.hash: fffff994,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"
containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c23bdc763b01672e6cb7e934236bc23e89892665c4518cce1a92bfd3230cd50,PodSandboxId:d8561e78594775abe17c477f444bda5ccdf4feec623c89b0dd77dd900a2cf47a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766116412592374221,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6361f87-812
3-46eb-8f28-110eb02b1927,},Annotations:map[string]string{io.kubernetes.container.hash: 158eaf00,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba408cece6208ddeb9327287a76e8a3d4f03bcc10c4b233b10f9959864c891b5,PodSandboxId:4d2c29ff4a2ed8cef6afbbd6aae68bc1475bd43ce62d8b9ac7d89237ae86f824,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a,State:CONTAINER_RUNNING,CreatedAt:1766116412573827827,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k4c59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d1adab-9314-4395-95f2-1ca383aefee1,},Annota
tions:map[string]string{io.kubernetes.container.hash: a49c72a6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f352655401bed294bcff033ea5d2aa2a0bdf87de5fbb1206e8731b680ce582c,PodSandboxId:2ed304f66f5d1100845fc56cd8d93da79cff42aca9a0c55c230af2194535cb4c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157,State:CONTAINER_RUNNING,CreatedAt:1766116408686391475,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-094166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dfb606b3e2663e111185ab803de8836,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 2df5c40d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf6833537f6aeed3fadb0f5baf15f935306cc5b6f0468e6e56b3922330e158ee,PodSandboxId:022dd0dc606224b9ea84f65a61d92ae969d481f95b322acbe2df9b403d830644,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1766116408697700650,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-094166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31afd2a7541d649fb566c092b7f15410,},Annotations:map[string]string{io.kubernetes.containe
r.hash: 5ddae9ea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d28987fec36e57788da0534ff938f4677376fcea5c129e696240ae6fc0ed015,PodSandboxId:31bb13c0703b5ea7fbb41944a5204406197098847c44617acb41400f918cd15d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95,State:CONTAINER_RUNNING,CreatedAt:1766116408617770483,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-094166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b78f15083684b811afd94e8a9518e12,},Annotations:map[string]string{io.kubernetes.container.hash:
404b5e1f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00fed501023e3502b030b4bb79ef632a05bce26dfa057b799045368d55f79e28,PodSandboxId:3cf899f74093f994cd317a64f629e1e64bf7e1d45d395ad5b5d6acfedd333e46,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62,State:CONTAINER_RUNNING,CreatedAt:1766116408638461352,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-094166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5474cd52daad06c999fd54796fd62f6,},Annotations:map[string]string{io.kubernet
es.container.hash: 78dfeacb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=751769e5-72a4-477c-99c8-ce8dbb3ba16f name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:11:59 old-k8s-version-094166 crio[884]: time="2025-12-19 04:11:59.477315020Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a5bca268-f7dc-40a9-b281-06b19f2dfaab name=/runtime.v1.RuntimeService/Version
	Dec 19 04:11:59 old-k8s-version-094166 crio[884]: time="2025-12-19 04:11:59.477461916Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a5bca268-f7dc-40a9-b281-06b19f2dfaab name=/runtime.v1.RuntimeService/Version
	Dec 19 04:11:59 old-k8s-version-094166 crio[884]: time="2025-12-19 04:11:59.479296957Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d8b07985-e405-4e1a-b12d-cea3350d6f9a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 04:11:59 old-k8s-version-094166 crio[884]: time="2025-12-19 04:11:59.480505644Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766117519480416286,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:198516,},InodesUsed:&UInt64Value{Value:87,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d8b07985-e405-4e1a-b12d-cea3350d6f9a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 04:11:59 old-k8s-version-094166 crio[884]: time="2025-12-19 04:11:59.481879084Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=33a8e9c2-9032-4270-9325-1dbb93a8ddac name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:11:59 old-k8s-version-094166 crio[884]: time="2025-12-19 04:11:59.481943364Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=33a8e9c2-9032-4270-9325-1dbb93a8ddac name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:11:59 old-k8s-version-094166 crio[884]: time="2025-12-19 04:11:59.482213852Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4ba53a084d3418609c316a45fc2fb31ecffa561d929d3c31c11897926152f7f7,PodSandboxId:1d41e1adeda344c2b6bf312ecd196cb3f294b1dd0940b74440188e728e4f8236,Metadata:&ContainerMetadata{Name:proxy,Attempt:0,},Image:&ImageSpec{Image:3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480,State:CONTAINER_RUNNING,CreatedAt:1766116455714758363,Labels:map[string]string{io.kubernetes.container.name: proxy,io.kubernetes.pod.name: kubernetes-dashboard-kong-f487b85cd-6h64p,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: ce63b698-0626-4cdb-9035-f9f905d770cf,},Annotations:map[string]string{io.kubernetes.container.hash: f625957b,io.kubernetes.container.ports: [{\"name\":\"proxy-
tls\",\"containerPort\":8443,\"protocol\":\"TCP\"},{\"name\":\"status\",\"containerPort\":8100,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"kong\",\"quit\",\"--wait=15\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29b3a5531151deeda30e8690a92f8af8076432b199545941605702c1654d71ca,PodSandboxId:1d41e1adeda344c2b6bf312ecd196cb3f294b1dd0940b74440188e728e4f8236,Metadata:&ContainerMetadata{Name:clear-stale-pid,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/kong@sha256:4379444ecfd82794b27de38a74ba540e8571683dfdfce74c8ecb4018f308fb29,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480,State:CONTAINER_EXITED,CreatedAt:1766116454700263443,Labels:map[string]string{io.kubernetes.container.name: clear-
stale-pid,io.kubernetes.pod.name: kubernetes-dashboard-kong-f487b85cd-6h64p,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: ce63b698-0626-4cdb-9035-f9f905d770cf,},Annotations:map[string]string{io.kubernetes.container.hash: fc5d9912,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30d52d8be50dac733ff4ec86c05667c686a18577fa43f59488bb796d1a5f17cc,PodSandboxId:d8561e78594775abe17c477f444bda5ccdf4feec623c89b0dd77dd900a2cf47a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766116443517547989,Labels:map[string]string{io.kubernetes.contain
er.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6361f87-8123-46eb-8f28-110eb02b1927,},Annotations:map[string]string{io.kubernetes.container.hash: 158eaf00,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:278a539192a9e7c904fb20462e7177dd70e833a4bcfc50bfda225435b77cdac4,PodSandboxId:ac4f5da2781b4b44297b4d7adecf74ff7c3ad33beaee16395c5e4882e53eed9f,Metadata:&ContainerMetadata{Name:kubernetes-dashboard-api,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard-api@sha256:2bd14c0ffee99d15fb1595644ebd1083ac32c5157c6e6fd8615b0f556a1390c2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0607af4fcd8ae78708c5bd51f34ccf8442967b8e41f56f008cc2884690f2f3b,State:CONTAINER_RUNNING,CreatedAt:1766116442753753567,Labels:ma
p[string]string{io.kubernetes.container.name: kubernetes-dashboard-api,io.kubernetes.pod.name: kubernetes-dashboard-api-56d75ddbb-tppfn,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 185d3f92-b7c0-43d3-a6f5-b9d52d8d41c0,},Annotations:map[string]string{io.kubernetes.container.hash: 547c95c6,io.kubernetes.container.ports: [{\"name\":\"api\",\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c52846cd4715f53aa4a5f474e43006e8c5483d90b1849df8dc0075dff1b6083c,PodSandboxId:88153576fecac10bb0a65a4a0730fde12e8880a90d8a049f71fef56dd129ead9,Metadata:&ContainerMetadata{Name:kubernetes-dashboard-auth,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard-auth@sha256:484c65877a3aa9e7095745576e55310f09f135b2fe5d9694289352fe65986052,Annotations:map[string]string{},UserS
pecifiedImage:,RuntimeHandler:,},ImageRef:dd54374d0ab14ab65dd3d5d975f97d5b4aff7b221db479874a4429225b9b22b1,State:CONTAINER_RUNNING,CreatedAt:1766116439276415330,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard-auth,io.kubernetes.pod.name: kubernetes-dashboard-auth-84ff87fdd5-zd9bz,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 1c55220d-544f-4947-b5c5-afdb85361029,},Annotations:map[string]string{io.kubernetes.container.hash: e1c595d7,io.kubernetes.container.ports: [{\"name\":\"auth\",\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:633226fb3e30d41ed72df6ade31838a480d47853a5dd02fecf2e176fc3b823b6,PodSandboxId:f31a88a220665e883c5e31493836838da73f09f4c18d8866a75898e7c9c7feb4,Metadata:&ContainerMetadata{Name:kubernetes-dashboard-metrics-scrap
er,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard-metrics-scraper@sha256:0cdefa0492f2868b93f43880ffad4bd8ae8105c02dc5661352a0a40723b05dbc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d9cbc9f4053ca11fa6edb814b512ab807a89346934700a3a425fc1e24a3f2167,State:CONTAINER_RUNNING,CreatedAt:1766116436190115082,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard-metrics-scraper,io.kubernetes.pod.name: kubernetes-dashboard-metrics-scraper-6b5c7dc479-5rsmg,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 43deb4b4-c100-4378-917c-1eefb131c216,},Annotations:map[string]string{io.kubernetes.container.hash: 33f81994,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d1bca547a
8cb13056ce6f41cd3d2827332c49678081ed4fdf1e5f07e548e8e9,PodSandboxId:5896d1e86f1716bc991865708d89f4e07225e965a5eae42250327484c19431a8,Metadata:&ContainerMetadata{Name:kubernetes-dashboard-web,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard-web@sha256:5924e55a2eb65774b68e35fd0d0b41a3ef5dc6ef3e02dce6e340cf59b6d67d30,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59f642f485d26d479d2dedc7c6139f5ce41939fa22c1152314fefdf3c463aa06,State:CONTAINER_RUNNING,CreatedAt:1766116432981249997,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard-web,io.kubernetes.pod.name: kubernetes-dashboard-web-858bd7466-c5kzr,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 5ae4b527-2bef-4144-a371-dc2f4116a3c3,},Annotations:map[string]string{io.kubernetes.container.hash: 1b2182b9,io.kubernetes.container.ports: [{\"name\":\"web\",\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:796faeaf498a0cf66778ac0bfdca4997a21c3fa259a33a7e4ed26351885ee0c9,PodSandboxId:7b093825f1fe4dd66c14ac2c10e7605906deab68f3acb0588b21657d6fea391e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1766116423691471037,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e1fc5999-4caf-496d-a302-707570e1f019,},Annotations:map[string]string{io.kubernetes.container.hash: c60f0de3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25458f0b01a86f841f0b80d9682d174573303f6a35d4a678193134bd34277761,PodSandboxId:fb0cbab3c54c59215d3f9b0027ec49fe8b34334a5cff2f1f4397a3758d14a620,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1766116420263737361,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jwzpn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4598eae-ba53-4173-bc4a-535887dc6d10,},Annotations:map[string]string{io.kubernetes.container.hash: fffff994,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"
containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c23bdc763b01672e6cb7e934236bc23e89892665c4518cce1a92bfd3230cd50,PodSandboxId:d8561e78594775abe17c477f444bda5ccdf4feec623c89b0dd77dd900a2cf47a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766116412592374221,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6361f87-812
3-46eb-8f28-110eb02b1927,},Annotations:map[string]string{io.kubernetes.container.hash: 158eaf00,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba408cece6208ddeb9327287a76e8a3d4f03bcc10c4b233b10f9959864c891b5,PodSandboxId:4d2c29ff4a2ed8cef6afbbd6aae68bc1475bd43ce62d8b9ac7d89237ae86f824,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a,State:CONTAINER_RUNNING,CreatedAt:1766116412573827827,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k4c59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d1adab-9314-4395-95f2-1ca383aefee1,},Annota
tions:map[string]string{io.kubernetes.container.hash: a49c72a6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f352655401bed294bcff033ea5d2aa2a0bdf87de5fbb1206e8731b680ce582c,PodSandboxId:2ed304f66f5d1100845fc56cd8d93da79cff42aca9a0c55c230af2194535cb4c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157,State:CONTAINER_RUNNING,CreatedAt:1766116408686391475,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-094166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dfb606b3e2663e111185ab803de8836,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 2df5c40d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf6833537f6aeed3fadb0f5baf15f935306cc5b6f0468e6e56b3922330e158ee,PodSandboxId:022dd0dc606224b9ea84f65a61d92ae969d481f95b322acbe2df9b403d830644,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1766116408697700650,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-094166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31afd2a7541d649fb566c092b7f15410,},Annotations:map[string]string{io.kubernetes.containe
r.hash: 5ddae9ea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d28987fec36e57788da0534ff938f4677376fcea5c129e696240ae6fc0ed015,PodSandboxId:31bb13c0703b5ea7fbb41944a5204406197098847c44617acb41400f918cd15d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95,State:CONTAINER_RUNNING,CreatedAt:1766116408617770483,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-094166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b78f15083684b811afd94e8a9518e12,},Annotations:map[string]string{io.kubernetes.container.hash:
404b5e1f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00fed501023e3502b030b4bb79ef632a05bce26dfa057b799045368d55f79e28,PodSandboxId:3cf899f74093f994cd317a64f629e1e64bf7e1d45d395ad5b5d6acfedd333e46,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62,State:CONTAINER_RUNNING,CreatedAt:1766116408638461352,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-094166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5474cd52daad06c999fd54796fd62f6,},Annotations:map[string]string{io.kubernet
es.container.hash: 78dfeacb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=33a8e9c2-9032-4270-9325-1dbb93a8ddac name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:11:59 old-k8s-version-094166 crio[884]: time="2025-12-19 04:11:59.529136837Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bdbaab22-e947-4112-9073-c23ce95873eb name=/runtime.v1.RuntimeService/Version
	Dec 19 04:11:59 old-k8s-version-094166 crio[884]: time="2025-12-19 04:11:59.529414678Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bdbaab22-e947-4112-9073-c23ce95873eb name=/runtime.v1.RuntimeService/Version
	Dec 19 04:11:59 old-k8s-version-094166 crio[884]: time="2025-12-19 04:11:59.531305426Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2441696d-df14-43f8-89ac-f9a1d894542a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 04:11:59 old-k8s-version-094166 crio[884]: time="2025-12-19 04:11:59.531899353Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766117519531832761,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:198516,},InodesUsed:&UInt64Value{Value:87,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2441696d-df14-43f8-89ac-f9a1d894542a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 04:11:59 old-k8s-version-094166 crio[884]: time="2025-12-19 04:11:59.532713522Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=df174dc0-da62-476f-9a11-f9bd0b2c2b27 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:11:59 old-k8s-version-094166 crio[884]: time="2025-12-19 04:11:59.532765273Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=df174dc0-da62-476f-9a11-f9bd0b2c2b27 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:11:59 old-k8s-version-094166 crio[884]: time="2025-12-19 04:11:59.533102548Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4ba53a084d3418609c316a45fc2fb31ecffa561d929d3c31c11897926152f7f7,PodSandboxId:1d41e1adeda344c2b6bf312ecd196cb3f294b1dd0940b74440188e728e4f8236,Metadata:&ContainerMetadata{Name:proxy,Attempt:0,},Image:&ImageSpec{Image:3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480,State:CONTAINER_RUNNING,CreatedAt:1766116455714758363,Labels:map[string]string{io.kubernetes.container.name: proxy,io.kubernetes.pod.name: kubernetes-dashboard-kong-f487b85cd-6h64p,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: ce63b698-0626-4cdb-9035-f9f905d770cf,},Annotations:map[string]string{io.kubernetes.container.hash: f625957b,io.kubernetes.container.ports: [{\"name\":\"proxy-
tls\",\"containerPort\":8443,\"protocol\":\"TCP\"},{\"name\":\"status\",\"containerPort\":8100,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"kong\",\"quit\",\"--wait=15\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29b3a5531151deeda30e8690a92f8af8076432b199545941605702c1654d71ca,PodSandboxId:1d41e1adeda344c2b6bf312ecd196cb3f294b1dd0940b74440188e728e4f8236,Metadata:&ContainerMetadata{Name:clear-stale-pid,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/kong@sha256:4379444ecfd82794b27de38a74ba540e8571683dfdfce74c8ecb4018f308fb29,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480,State:CONTAINER_EXITED,CreatedAt:1766116454700263443,Labels:map[string]string{io.kubernetes.container.name: clear-
stale-pid,io.kubernetes.pod.name: kubernetes-dashboard-kong-f487b85cd-6h64p,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: ce63b698-0626-4cdb-9035-f9f905d770cf,},Annotations:map[string]string{io.kubernetes.container.hash: fc5d9912,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30d52d8be50dac733ff4ec86c05667c686a18577fa43f59488bb796d1a5f17cc,PodSandboxId:d8561e78594775abe17c477f444bda5ccdf4feec623c89b0dd77dd900a2cf47a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766116443517547989,Labels:map[string]string{io.kubernetes.contain
er.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6361f87-8123-46eb-8f28-110eb02b1927,},Annotations:map[string]string{io.kubernetes.container.hash: 158eaf00,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:278a539192a9e7c904fb20462e7177dd70e833a4bcfc50bfda225435b77cdac4,PodSandboxId:ac4f5da2781b4b44297b4d7adecf74ff7c3ad33beaee16395c5e4882e53eed9f,Metadata:&ContainerMetadata{Name:kubernetes-dashboard-api,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard-api@sha256:2bd14c0ffee99d15fb1595644ebd1083ac32c5157c6e6fd8615b0f556a1390c2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0607af4fcd8ae78708c5bd51f34ccf8442967b8e41f56f008cc2884690f2f3b,State:CONTAINER_RUNNING,CreatedAt:1766116442753753567,Labels:ma
p[string]string{io.kubernetes.container.name: kubernetes-dashboard-api,io.kubernetes.pod.name: kubernetes-dashboard-api-56d75ddbb-tppfn,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 185d3f92-b7c0-43d3-a6f5-b9d52d8d41c0,},Annotations:map[string]string{io.kubernetes.container.hash: 547c95c6,io.kubernetes.container.ports: [{\"name\":\"api\",\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c52846cd4715f53aa4a5f474e43006e8c5483d90b1849df8dc0075dff1b6083c,PodSandboxId:88153576fecac10bb0a65a4a0730fde12e8880a90d8a049f71fef56dd129ead9,Metadata:&ContainerMetadata{Name:kubernetes-dashboard-auth,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard-auth@sha256:484c65877a3aa9e7095745576e55310f09f135b2fe5d9694289352fe65986052,Annotations:map[string]string{},UserS
pecifiedImage:,RuntimeHandler:,},ImageRef:dd54374d0ab14ab65dd3d5d975f97d5b4aff7b221db479874a4429225b9b22b1,State:CONTAINER_RUNNING,CreatedAt:1766116439276415330,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard-auth,io.kubernetes.pod.name: kubernetes-dashboard-auth-84ff87fdd5-zd9bz,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 1c55220d-544f-4947-b5c5-afdb85361029,},Annotations:map[string]string{io.kubernetes.container.hash: e1c595d7,io.kubernetes.container.ports: [{\"name\":\"auth\",\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:633226fb3e30d41ed72df6ade31838a480d47853a5dd02fecf2e176fc3b823b6,PodSandboxId:f31a88a220665e883c5e31493836838da73f09f4c18d8866a75898e7c9c7feb4,Metadata:&ContainerMetadata{Name:kubernetes-dashboard-metrics-scrap
er,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard-metrics-scraper@sha256:0cdefa0492f2868b93f43880ffad4bd8ae8105c02dc5661352a0a40723b05dbc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d9cbc9f4053ca11fa6edb814b512ab807a89346934700a3a425fc1e24a3f2167,State:CONTAINER_RUNNING,CreatedAt:1766116436190115082,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard-metrics-scraper,io.kubernetes.pod.name: kubernetes-dashboard-metrics-scraper-6b5c7dc479-5rsmg,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 43deb4b4-c100-4378-917c-1eefb131c216,},Annotations:map[string]string{io.kubernetes.container.hash: 33f81994,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d1bca547a
8cb13056ce6f41cd3d2827332c49678081ed4fdf1e5f07e548e8e9,PodSandboxId:5896d1e86f1716bc991865708d89f4e07225e965a5eae42250327484c19431a8,Metadata:&ContainerMetadata{Name:kubernetes-dashboard-web,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard-web@sha256:5924e55a2eb65774b68e35fd0d0b41a3ef5dc6ef3e02dce6e340cf59b6d67d30,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59f642f485d26d479d2dedc7c6139f5ce41939fa22c1152314fefdf3c463aa06,State:CONTAINER_RUNNING,CreatedAt:1766116432981249997,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard-web,io.kubernetes.pod.name: kubernetes-dashboard-web-858bd7466-c5kzr,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 5ae4b527-2bef-4144-a371-dc2f4116a3c3,},Annotations:map[string]string{io.kubernetes.container.hash: 1b2182b9,io.kubernetes.container.ports: [{\"name\":\"web\",\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:796faeaf498a0cf66778ac0bfdca4997a21c3fa259a33a7e4ed26351885ee0c9,PodSandboxId:7b093825f1fe4dd66c14ac2c10e7605906deab68f3acb0588b21657d6fea391e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1766116423691471037,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e1fc5999-4caf-496d-a302-707570e1f019,},Annotations:map[string]string{io.kubernetes.container.hash: c60f0de3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25458f0b01a86f841f0b80d9682d174573303f6a35d4a678193134bd34277761,PodSandboxId:fb0cbab3c54c59215d3f9b0027ec49fe8b34334a5cff2f1f4397a3758d14a620,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1766116420263737361,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jwzpn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4598eae-ba53-4173-bc4a-535887dc6d10,},Annotations:map[string]string{io.kubernetes.container.hash: fffff994,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"
containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c23bdc763b01672e6cb7e934236bc23e89892665c4518cce1a92bfd3230cd50,PodSandboxId:d8561e78594775abe17c477f444bda5ccdf4feec623c89b0dd77dd900a2cf47a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766116412592374221,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6361f87-812
3-46eb-8f28-110eb02b1927,},Annotations:map[string]string{io.kubernetes.container.hash: 158eaf00,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba408cece6208ddeb9327287a76e8a3d4f03bcc10c4b233b10f9959864c891b5,PodSandboxId:4d2c29ff4a2ed8cef6afbbd6aae68bc1475bd43ce62d8b9ac7d89237ae86f824,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a,State:CONTAINER_RUNNING,CreatedAt:1766116412573827827,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k4c59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d1adab-9314-4395-95f2-1ca383aefee1,},Annota
tions:map[string]string{io.kubernetes.container.hash: a49c72a6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f352655401bed294bcff033ea5d2aa2a0bdf87de5fbb1206e8731b680ce582c,PodSandboxId:2ed304f66f5d1100845fc56cd8d93da79cff42aca9a0c55c230af2194535cb4c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157,State:CONTAINER_RUNNING,CreatedAt:1766116408686391475,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-094166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dfb606b3e2663e111185ab803de8836,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 2df5c40d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf6833537f6aeed3fadb0f5baf15f935306cc5b6f0468e6e56b3922330e158ee,PodSandboxId:022dd0dc606224b9ea84f65a61d92ae969d481f95b322acbe2df9b403d830644,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1766116408697700650,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-094166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31afd2a7541d649fb566c092b7f15410,},Annotations:map[string]string{io.kubernetes.containe
r.hash: 5ddae9ea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d28987fec36e57788da0534ff938f4677376fcea5c129e696240ae6fc0ed015,PodSandboxId:31bb13c0703b5ea7fbb41944a5204406197098847c44617acb41400f918cd15d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95,State:CONTAINER_RUNNING,CreatedAt:1766116408617770483,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-094166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b78f15083684b811afd94e8a9518e12,},Annotations:map[string]string{io.kubernetes.container.hash:
404b5e1f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00fed501023e3502b030b4bb79ef632a05bce26dfa057b799045368d55f79e28,PodSandboxId:3cf899f74093f994cd317a64f629e1e64bf7e1d45d395ad5b5d6acfedd333e46,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62,State:CONTAINER_RUNNING,CreatedAt:1766116408638461352,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-094166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5474cd52daad06c999fd54796fd62f6,},Annotations:map[string]string{io.kubernet
es.container.hash: 78dfeacb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=df174dc0-da62-476f-9a11-f9bd0b2c2b27 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                      CREATED             STATE               NAME                                   ATTEMPT             POD ID              POD                                                     NAMESPACE
	4ba53a084d341       3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480                                                           17 minutes ago      Running             proxy                                  0                   1d41e1adeda34       kubernetes-dashboard-kong-f487b85cd-6h64p               kubernetes-dashboard
	29b3a5531151d       docker.io/library/kong@sha256:4379444ecfd82794b27de38a74ba540e8571683dfdfce74c8ecb4018f308fb29                             17 minutes ago      Exited              clear-stale-pid                        0                   1d41e1adeda34       kubernetes-dashboard-kong-f487b85cd-6h64p               kubernetes-dashboard
	30d52d8be50da       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                           17 minutes ago      Running             storage-provisioner                    3                   d8561e7859477       storage-provisioner                                     kube-system
	278a539192a9e       docker.io/kubernetesui/dashboard-api@sha256:2bd14c0ffee99d15fb1595644ebd1083ac32c5157c6e6fd8615b0f556a1390c2               17 minutes ago      Running             kubernetes-dashboard-api               0                   ac4f5da2781b4       kubernetes-dashboard-api-56d75ddbb-tppfn                kubernetes-dashboard
	c52846cd4715f       docker.io/kubernetesui/dashboard-auth@sha256:484c65877a3aa9e7095745576e55310f09f135b2fe5d9694289352fe65986052              18 minutes ago      Running             kubernetes-dashboard-auth              0                   88153576fecac       kubernetes-dashboard-auth-84ff87fdd5-zd9bz              kubernetes-dashboard
	633226fb3e30d       docker.io/kubernetesui/dashboard-metrics-scraper@sha256:0cdefa0492f2868b93f43880ffad4bd8ae8105c02dc5661352a0a40723b05dbc   18 minutes ago      Running             kubernetes-dashboard-metrics-scraper   0                   f31a88a220665       kubernetes-dashboard-metrics-scraper-6b5c7dc479-5rsmg   kubernetes-dashboard
	6d1bca547a8cb       docker.io/kubernetesui/dashboard-web@sha256:5924e55a2eb65774b68e35fd0d0b41a3ef5dc6ef3e02dce6e340cf59b6d67d30               18 minutes ago      Running             kubernetes-dashboard-web               0                   5896d1e86f171       kubernetes-dashboard-web-858bd7466-c5kzr                kubernetes-dashboard
	796faeaf498a0       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                        18 minutes ago      Running             busybox                                1                   7b093825f1fe4       busybox                                                 default
	25458f0b01a86       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                           18 minutes ago      Running             coredns                                1                   fb0cbab3c54c5       coredns-5dd5756b68-jwzpn                                kube-system
	9c23bdc763b01       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                           18 minutes ago      Exited              storage-provisioner                    2                   d8561e7859477       storage-provisioner                                     kube-system
	ba408cece6208       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                                           18 minutes ago      Running             kube-proxy                             1                   4d2c29ff4a2ed       kube-proxy-k4c59                                        kube-system
	cf6833537f6ae       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                           18 minutes ago      Running             etcd                                   1                   022dd0dc60622       etcd-old-k8s-version-094166                             kube-system
	9f352655401be       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                                           18 minutes ago      Running             kube-scheduler                         1                   2ed304f66f5d1       kube-scheduler-old-k8s-version-094166                   kube-system
	00fed501023e3       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                                           18 minutes ago      Running             kube-controller-manager                1                   3cf899f74093f       kube-controller-manager-old-k8s-version-094166          kube-system
	0d28987fec36e       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                                           18 minutes ago      Running             kube-apiserver                         1                   31bb13c0703b5       kube-apiserver-old-k8s-version-094166                   kube-system
	
	
	==> coredns [25458f0b01a86f841f0b80d9682d174573303f6a35d4a678193134bd34277761] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:46353 - 10313 "HINFO IN 7031563663414278408.8956184294594618866. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.01812941s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-094166
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-094166
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=old-k8s-version-094166
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T03_50_28_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 03:50:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-094166
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 04:11:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 04:09:53 +0000   Fri, 19 Dec 2025 03:50:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 04:09:53 +0000   Fri, 19 Dec 2025 03:50:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 04:09:53 +0000   Fri, 19 Dec 2025 03:50:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 04:09:53 +0000   Fri, 19 Dec 2025 03:53:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.65
	  Hostname:    old-k8s-version-094166
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 fcbb2c892246481388d54e88e69ff22c
	  System UUID:                fcbb2c89-2246-4813-88d5-4e88e69ff22c
	  Boot ID:                    05d4f12e-d326-4afb-9bcb-c16595fd1b4a
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-5dd5756b68-jwzpn                                 100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     21m
	  kube-system                 etcd-old-k8s-version-094166                              100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         21m
	  kube-system                 kube-apiserver-old-k8s-version-094166                    250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-old-k8s-version-094166           200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-k4c59                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-old-k8s-version-094166                    100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-57f55c9bc5-9sqkf                          100m (5%)     0 (0%)      200Mi (6%)       0 (0%)         20m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        kubernetes-dashboard-api-56d75ddbb-tppfn                 100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    18m
	  kubernetes-dashboard        kubernetes-dashboard-auth-84ff87fdd5-zd9bz               100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    18m
	  kubernetes-dashboard        kubernetes-dashboard-kong-f487b85cd-6h64p                0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-metrics-scraper-6b5c7dc479-5rsmg    100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    18m
	  kubernetes-dashboard        kubernetes-dashboard-web-858bd7466-c5kzr                 100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests      Limits
	  --------           --------      ------
	  cpu                1250m (62%)   1 (50%)
	  memory             1170Mi (39%)  1770Mi (59%)
	  ephemeral-storage  0 (0%)        0 (0%)
	  hugepages-2Mi      0 (0%)        0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 18m                kube-proxy       
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node old-k8s-version-094166 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node old-k8s-version-094166 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m                kubelet          Node old-k8s-version-094166 status is now: NodeHasSufficientPID
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeReady                21m                kubelet          Node old-k8s-version-094166 status is now: NodeReady
	  Normal  RegisteredNode           21m                node-controller  Node old-k8s-version-094166 event: Registered Node old-k8s-version-094166 in Controller
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node old-k8s-version-094166 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node old-k8s-version-094166 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node old-k8s-version-094166 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18m                node-controller  Node old-k8s-version-094166 event: Registered Node old-k8s-version-094166 in Controller
	
	
	==> dmesg <==
	[Dec19 03:53] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000050] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.003995] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.901908] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.128466] kauditd_printk_skb: 60 callbacks suppressed
	[  +0.099578] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.498903] kauditd_printk_skb: 177 callbacks suppressed
	[  +0.000048] kauditd_printk_skb: 122 callbacks suppressed
	[  +3.653682] kauditd_printk_skb: 143 callbacks suppressed
	[  +6.164427] kauditd_printk_skb: 204 callbacks suppressed
	[  +6.279079] kauditd_printk_skb: 32 callbacks suppressed
	[Dec19 03:54] kauditd_printk_skb: 47 callbacks suppressed
	[ +11.943764] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [cf6833537f6aeed3fadb0f5baf15f935306cc5b6f0468e6e56b3922330e158ee] <==
	{"level":"info","ts":"2025-12-19T03:54:14.482378Z","caller":"traceutil/trace.go:171","msg":"trace[835898818] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:792; }","duration":"647.220797ms","start":"2025-12-19T03:54:13.835147Z","end":"2025-12-19T03:54:14.482368Z","steps":["trace[835898818] 'agreement among raft nodes before linearized reading'  (duration: 646.816574ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:54:14.482418Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-19T03:54:13.835136Z","time spent":"647.271845ms","remote":"127.0.0.1:33578","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2025-12-19T03:54:17.012693Z","caller":"traceutil/trace.go:171","msg":"trace[919623372] linearizableReadLoop","detail":"{readStateIndex:858; appliedIndex:857; }","duration":"267.888131ms","start":"2025-12-19T03:54:16.744792Z","end":"2025-12-19T03:54:17.01268Z","steps":["trace[919623372] 'read index received'  (duration: 267.75147ms)","trace[919623372] 'applied index is now lower than readState.Index'  (duration: 136.325µs)"],"step_count":2}
	{"level":"info","ts":"2025-12-19T03:54:17.012827Z","caller":"traceutil/trace.go:171","msg":"trace[1409116822] transaction","detail":"{read_only:false; response_revision:805; number_of_response:1; }","duration":"301.760136ms","start":"2025-12-19T03:54:16.711061Z","end":"2025-12-19T03:54:17.012821Z","steps":["trace[1409116822] 'process raft request'  (duration: 301.52474ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:54:17.013026Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"175.760958ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T03:54:17.013095Z","caller":"traceutil/trace.go:171","msg":"trace[1626339163] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:805; }","duration":"175.839162ms","start":"2025-12-19T03:54:16.837244Z","end":"2025-12-19T03:54:17.013083Z","steps":["trace[1626339163] 'agreement among raft nodes before linearized reading'  (duration: 175.741891ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:54:17.013048Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-19T03:54:16.71104Z","time spent":"301.914135ms","remote":"127.0.0.1:33780","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":13227,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd-6h64p\" mod_revision:800 > success:<request_put:<key:\"/registry/pods/kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd-6h64p\" value_size:13142 >> failure:<request_range:<key:\"/registry/pods/kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd-6h64p\" > >"}
	{"level":"warn","ts":"2025-12-19T03:54:17.013315Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"268.534942ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:5 size:31904"}
	{"level":"info","ts":"2025-12-19T03:54:17.013475Z","caller":"traceutil/trace.go:171","msg":"trace[412993667] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:5; response_revision:805; }","duration":"268.697504ms","start":"2025-12-19T03:54:16.744769Z","end":"2025-12-19T03:54:17.013466Z","steps":["trace[412993667] 'agreement among raft nodes before linearized reading'  (duration: 268.414353ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:54:45.374171Z","caller":"traceutil/trace.go:171","msg":"trace[22211896] transaction","detail":"{read_only:false; response_revision:840; number_of_response:1; }","duration":"120.475084ms","start":"2025-12-19T03:54:45.253682Z","end":"2025-12-19T03:54:45.374157Z","steps":["trace[22211896] 'process raft request'  (duration: 120.380289ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:54:46.10744Z","caller":"traceutil/trace.go:171","msg":"trace[184689208] linearizableReadLoop","detail":"{readStateIndex:901; appliedIndex:900; }","duration":"361.132161ms","start":"2025-12-19T03:54:45.746295Z","end":"2025-12-19T03:54:46.107427Z","steps":["trace[184689208] 'read index received'  (duration: 360.991851ms)","trace[184689208] 'applied index is now lower than readState.Index'  (duration: 139.846µs)"],"step_count":2}
	{"level":"info","ts":"2025-12-19T03:54:46.107803Z","caller":"traceutil/trace.go:171","msg":"trace[122968497] transaction","detail":"{read_only:false; response_revision:841; number_of_response:1; }","duration":"619.87802ms","start":"2025-12-19T03:54:45.487915Z","end":"2025-12-19T03:54:46.107793Z","steps":["trace[122968497] 'process raft request'  (duration: 619.413842ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:54:46.107793Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"272.644735ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-12-19T03:54:46.108034Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"361.759533ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:5 size:31730"}
	{"level":"info","ts":"2025-12-19T03:54:46.108Z","caller":"traceutil/trace.go:171","msg":"trace[735910079] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:841; }","duration":"272.86216ms","start":"2025-12-19T03:54:45.835127Z","end":"2025-12-19T03:54:46.107989Z","steps":["trace[735910079] 'agreement among raft nodes before linearized reading'  (duration: 272.588085ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:54:46.108075Z","caller":"traceutil/trace.go:171","msg":"trace[1979747885] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:5; response_revision:841; }","duration":"361.798659ms","start":"2025-12-19T03:54:45.746269Z","end":"2025-12-19T03:54:46.108068Z","steps":["trace[1979747885] 'agreement among raft nodes before linearized reading'  (duration: 361.717001ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:54:46.108101Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-19T03:54:45.746227Z","time spent":"361.868953ms","remote":"127.0.0.1:33780","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":5,"response size":31753,"request content":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" "}
	{"level":"warn","ts":"2025-12-19T03:54:46.107967Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-19T03:54:45.487898Z","time spent":"620.007152ms","remote":"127.0.0.1:33874","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":687,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-osq6jgaiw7qwbygbc3dlqorewy\" mod_revision:829 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-osq6jgaiw7qwbygbc3dlqorewy\" value_size:614 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-osq6jgaiw7qwbygbc3dlqorewy\" > >"}
	{"level":"info","ts":"2025-12-19T04:03:30.591667Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1034}
	{"level":"info","ts":"2025-12-19T04:03:30.668548Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1034,"took":"76.073147ms","hash":3314705234}
	{"level":"info","ts":"2025-12-19T04:03:30.66859Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3314705234,"revision":1034,"compact-revision":-1}
	{"level":"info","ts":"2025-12-19T04:08:30.598329Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1278}
	{"level":"info","ts":"2025-12-19T04:08:30.600527Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1278,"took":"1.879204ms","hash":3312816245}
	{"level":"info","ts":"2025-12-19T04:08:30.600576Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3312816245,"revision":1278,"compact-revision":1034}
	{"level":"info","ts":"2025-12-19T04:09:54.024908Z","caller":"traceutil/trace.go:171","msg":"trace[364313374] transaction","detail":"{read_only:false; response_revision:1591; number_of_response:1; }","duration":"113.278632ms","start":"2025-12-19T04:09:53.911532Z","end":"2025-12-19T04:09:54.024811Z","steps":["trace[364313374] 'process raft request'  (duration: 113.055658ms)"],"step_count":1}
	
	
	==> kernel <==
	 04:11:59 up 18 min,  0 users,  load average: 0.32, 0.25, 0.20
	Linux old-k8s-version-094166 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Dec 17 12:49:57 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [0d28987fec36e57788da0534ff938f4677376fcea5c129e696240ae6fc0ed015] <==
	E1219 04:08:32.996359       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1219 04:08:32.996368       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1219 04:08:32.996382       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1219 04:08:32.997531       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1219 04:09:31.847166       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.104.104.239:443: connect: connection refused
	I1219 04:09:31.847219       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1219 04:09:32.996695       1 handler_proxy.go:93] no RequestInfo found in the context
	E1219 04:09:32.996766       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1219 04:09:32.996774       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 04:09:32.998021       1 handler_proxy.go:93] no RequestInfo found in the context
	E1219 04:09:32.998106       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1219 04:09:32.998113       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1219 04:10:31.847408       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.104.104.239:443: connect: connection refused
	I1219 04:10:31.847447       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1219 04:11:31.847095       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.104.104.239:443: connect: connection refused
	I1219 04:11:31.847151       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1219 04:11:32.997308       1 handler_proxy.go:93] no RequestInfo found in the context
	E1219 04:11:32.997361       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1219 04:11:32.997369       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 04:11:32.998485       1 handler_proxy.go:93] no RequestInfo found in the context
	E1219 04:11:32.998550       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1219 04:11:32.998556       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [00fed501023e3502b030b4bb79ef632a05bce26dfa057b799045368d55f79e28] <==
	I1219 04:06:15.231650       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1219 04:06:44.581459       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 04:06:45.242703       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1219 04:07:14.587416       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 04:07:15.251494       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1219 04:07:44.594165       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 04:07:45.259663       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1219 04:08:14.599554       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 04:08:15.267252       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1219 04:08:44.605618       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 04:08:45.276886       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1219 04:09:14.611099       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 04:09:15.286622       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1219 04:09:44.618758       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 04:09:45.295464       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1219 04:09:59.199014       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="291.281µs"
	I1219 04:10:13.195428       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="117.547µs"
	E1219 04:10:14.624349       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 04:10:15.306680       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1219 04:10:44.631008       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 04:10:45.320873       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1219 04:11:14.636126       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 04:11:15.329595       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1219 04:11:44.642115       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 04:11:45.337263       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [ba408cece6208ddeb9327287a76e8a3d4f03bcc10c4b233b10f9959864c891b5] <==
	I1219 03:53:32.731023       1 server_others.go:69] "Using iptables proxy"
	I1219 03:53:32.741933       1 node.go:141] Successfully retrieved node IP: 192.168.61.65
	I1219 03:53:32.784509       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1219 03:53:32.784528       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1219 03:53:32.787789       1 server_others.go:152] "Using iptables Proxier"
	I1219 03:53:32.787930       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1219 03:53:32.788159       1 server.go:846] "Version info" version="v1.28.0"
	I1219 03:53:32.788382       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:53:32.789313       1 config.go:188] "Starting service config controller"
	I1219 03:53:32.789389       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1219 03:53:32.789433       1 config.go:97] "Starting endpoint slice config controller"
	I1219 03:53:32.789457       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1219 03:53:32.791714       1 config.go:315] "Starting node config controller"
	I1219 03:53:32.791765       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1219 03:53:32.890302       1 shared_informer.go:318] Caches are synced for service config
	I1219 03:53:32.890814       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1219 03:53:32.892144       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [9f352655401bed294bcff033ea5d2aa2a0bdf87de5fbb1206e8731b680ce582c] <==
	I1219 03:53:29.765992       1 serving.go:348] Generated self-signed cert in-memory
	W1219 03:53:31.922660       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1219 03:53:31.922706       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1219 03:53:31.922721       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1219 03:53:31.922727       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1219 03:53:31.973790       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1219 03:53:31.973940       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:53:31.983544       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1219 03:53:31.984064       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:53:31.992629       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1219 03:53:31.984083       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1219 03:53:32.093025       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 19 04:09:26 old-k8s-version-094166 kubelet[1229]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 19 04:09:34 old-k8s-version-094166 kubelet[1229]: E1219 04:09:34.182990    1229 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9sqkf" podUID="7aa70eac-5629-4a04-8a38-3701d9e33cda"
	Dec 19 04:09:48 old-k8s-version-094166 kubelet[1229]: E1219 04:09:48.190139    1229 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 19 04:09:48 old-k8s-version-094166 kubelet[1229]: E1219 04:09:48.190177    1229 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 19 04:09:48 old-k8s-version-094166 kubelet[1229]: E1219 04:09:48.190350    1229 kuberuntime_manager.go:1209] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-jrkrg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Prob
e{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePoli
cy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-9sqkf_kube-system(7aa70eac-5629-4a04-8a38-3701d9e33cda): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 19 04:09:48 old-k8s-version-094166 kubelet[1229]: E1219 04:09:48.190382    1229 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-9sqkf" podUID="7aa70eac-5629-4a04-8a38-3701d9e33cda"
	Dec 19 04:09:59 old-k8s-version-094166 kubelet[1229]: E1219 04:09:59.184332    1229 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9sqkf" podUID="7aa70eac-5629-4a04-8a38-3701d9e33cda"
	Dec 19 04:10:13 old-k8s-version-094166 kubelet[1229]: E1219 04:10:13.183086    1229 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9sqkf" podUID="7aa70eac-5629-4a04-8a38-3701d9e33cda"
	Dec 19 04:10:26 old-k8s-version-094166 kubelet[1229]: E1219 04:10:26.206083    1229 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 19 04:10:26 old-k8s-version-094166 kubelet[1229]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 19 04:10:26 old-k8s-version-094166 kubelet[1229]:         ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 19 04:10:26 old-k8s-version-094166 kubelet[1229]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 19 04:10:26 old-k8s-version-094166 kubelet[1229]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 19 04:10:27 old-k8s-version-094166 kubelet[1229]: E1219 04:10:27.183942    1229 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9sqkf" podUID="7aa70eac-5629-4a04-8a38-3701d9e33cda"
	Dec 19 04:10:41 old-k8s-version-094166 kubelet[1229]: E1219 04:10:41.182771    1229 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9sqkf" podUID="7aa70eac-5629-4a04-8a38-3701d9e33cda"
	Dec 19 04:10:52 old-k8s-version-094166 kubelet[1229]: E1219 04:10:52.183584    1229 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9sqkf" podUID="7aa70eac-5629-4a04-8a38-3701d9e33cda"
	Dec 19 04:11:05 old-k8s-version-094166 kubelet[1229]: E1219 04:11:05.183089    1229 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9sqkf" podUID="7aa70eac-5629-4a04-8a38-3701d9e33cda"
	Dec 19 04:11:21 old-k8s-version-094166 kubelet[1229]: E1219 04:11:21.183559    1229 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9sqkf" podUID="7aa70eac-5629-4a04-8a38-3701d9e33cda"
	Dec 19 04:11:26 old-k8s-version-094166 kubelet[1229]: E1219 04:11:26.206600    1229 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 19 04:11:26 old-k8s-version-094166 kubelet[1229]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 19 04:11:26 old-k8s-version-094166 kubelet[1229]:         ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 19 04:11:26 old-k8s-version-094166 kubelet[1229]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 19 04:11:26 old-k8s-version-094166 kubelet[1229]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 19 04:11:34 old-k8s-version-094166 kubelet[1229]: E1219 04:11:34.185609    1229 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9sqkf" podUID="7aa70eac-5629-4a04-8a38-3701d9e33cda"
	Dec 19 04:11:45 old-k8s-version-094166 kubelet[1229]: E1219 04:11:45.184604    1229 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9sqkf" podUID="7aa70eac-5629-4a04-8a38-3701d9e33cda"
	
	
	==> kubernetes-dashboard [278a539192a9e7c904fb20462e7177dd70e833a4bcfc50bfda225435b77cdac4] <==
	I1219 03:54:03.051217       1 main.go:40] "Starting Kubernetes Dashboard API" version="1.14.0"
	I1219 03:54:03.051322       1 init.go:49] Using in-cluster config
	I1219 03:54:03.051767       1 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1219 03:54:03.051800       1 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1219 03:54:03.051939       1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1219 03:54:03.051977       1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1219 03:54:03.059810       1 main.go:119] "Successful initial request to the apiserver" version="v1.28.0"
	I1219 03:54:03.059967       1 client.go:265] Creating in-cluster Sidecar client
	I1219 03:54:03.070983       1 main.go:96] "Listening and serving on" address="0.0.0.0:8000"
	I1219 03:54:03.150776       1 manager.go:101] Successful request to sidecar
	
	
	==> kubernetes-dashboard [633226fb3e30d41ed72df6ade31838a480d47853a5dd02fecf2e176fc3b823b6] <==
	10.244.0.1 - - [19/Dec/2025:04:09:16 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:04:09:26 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:04:09:33 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:04:09:36 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:04:09:46 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:04:09:56 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:04:10:03 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:04:10:06 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:04:10:16 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:04:10:26 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:04:10:33 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:04:10:36 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:04:10:46 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:04:10:56 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:04:11:03 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:04:11:06 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:04:11:16 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:04:11:26 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:04:11:33 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:04:11:36 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:04:11:46 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:04:11:56 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	E1219 04:09:56.502320       1 main.go:114] Error scraping node metrics: the server is currently unable to handle the request (get nodes.metrics.k8s.io)
	E1219 04:10:56.496030       1 main.go:114] Error scraping node metrics: the server is currently unable to handle the request (get nodes.metrics.k8s.io)
	E1219 04:11:56.496412       1 main.go:114] Error scraping node metrics: the server is currently unable to handle the request (get nodes.metrics.k8s.io)
	
	
	==> kubernetes-dashboard [6d1bca547a8cb13056ce6f41cd3d2827332c49678081ed4fdf1e5f07e548e8e9] <==
	I1219 03:53:53.296986       1 main.go:37] "Starting Kubernetes Dashboard Web" version="1.7.0"
	I1219 03:53:53.297089       1 init.go:48] Using in-cluster config
	I1219 03:53:53.297478       1 main.go:57] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> kubernetes-dashboard [c52846cd4715f53aa4a5f474e43006e8c5483d90b1849df8dc0075dff1b6083c] <==
	I1219 03:53:59.504360       1 main.go:34] "Starting Kubernetes Dashboard Auth" version="1.4.0"
	I1219 03:53:59.504522       1 init.go:49] Using in-cluster config
	I1219 03:53:59.504672       1 main.go:44] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> storage-provisioner [30d52d8be50dac733ff4ec86c05667c686a18577fa43f59488bb796d1a5f17cc] <==
	I1219 03:54:03.682828       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1219 03:54:03.697654       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1219 03:54:03.698060       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1219 03:54:21.116001       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1219 03:54:21.119286       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"151e0a76-60e8-47dd-a88b-79e45b0cb6e8", APIVersion:"v1", ResourceVersion:"806", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-094166_90084209-5341-4d92-95a3-fa64f6c8361b became leader
	I1219 03:54:21.119469       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-094166_90084209-5341-4d92-95a3-fa64f6c8361b!
	I1219 03:54:21.221102       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-094166_90084209-5341-4d92-95a3-fa64f6c8361b!
	
	
	==> storage-provisioner [9c23bdc763b01672e6cb7e934236bc23e89892665c4518cce1a92bfd3230cd50] <==
	I1219 03:53:32.699182       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1219 03:54:02.702442       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-094166 -n old-k8s-version-094166
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-094166 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: metrics-server-57f55c9bc5-9sqkf
helpers_test.go:283: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context old-k8s-version-094166 describe pod metrics-server-57f55c9bc5-9sqkf
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context old-k8s-version-094166 describe pod metrics-server-57f55c9bc5-9sqkf: exit status 1 (66.218293ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-9sqkf" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context old-k8s-version-094166 describe pod metrics-server-57f55c9bc5-9sqkf: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (542.59s)
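For reference, the wait that timed out above can be re-run by hand against the same profile. This is a minimal sketch, assuming the old-k8s-version-094166 cluster still exists and its kubectl context is available (the label selector and namespace are the ones the test's dashboard check uses, as seen in the matching wait in the next section):

	# list the dashboard pods the test waits for
	kubectl --context old-k8s-version-094166 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# wait for them to become Ready, mirroring the test's 9m timeout
	kubectl --context old-k8s-version-094166 -n kubernetes-dashboard wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m0s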

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (542.15s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1219 04:09:18.512912    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-936345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 04:09:23.144022    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/enable-default-cni-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 04:09:24.605785    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/custom-flannel-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-298059 -n no-preload-298059
start_stop_delete_test.go:285: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-12-19 04:18:05.613817297 +0000 UTC m=+6788.651996600
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-298059 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context no-preload-298059 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (64.856968ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "dashboard-metrics-scraper" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-298059 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
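The assertion above checks that the dashboard-metrics-scraper deployment carries the image override passed earlier via --images=MetricsScraper=registry.k8s.io/echoserver:1.4. A minimal manual equivalent, assuming the deployment had actually been created in the kubernetes-dashboard namespace (here it was not, hence the NotFound error):

	# print the container image(s) of the scraper deployment and check for the expected override
	kubectl --context no-preload-298059 -n kubernetes-dashboard get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}' | grep 'registry.k8s.io/echoserver:1.4'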
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-298059 -n no-preload-298059
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-298059 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-298059 logs -n 25: (1.170405316s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────────
──────────┐
	│ COMMAND │                                                                                                                       ARGS                                                                                                                       │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────────
──────────┤
	│ stop    │ -p old-k8s-version-094166 --alsologtostderr -v=3                                                                                                                                                                                                 │ old-k8s-version-094166       │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:53 UTC │
	│ addons  │ enable metrics-server -p no-preload-298059 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                          │ no-preload-298059            │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:51 UTC │
	│ stop    │ -p no-preload-298059 --alsologtostderr -v=3                                                                                                                                                                                                      │ no-preload-298059            │ jenkins │ v1.37.0 │ 19 Dec 25 03:51 UTC │ 19 Dec 25 03:53 UTC │
	│ addons  │ enable metrics-server -p embed-certs-244717 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ embed-certs-244717           │ jenkins │ v1.37.0 │ 19 Dec 25 03:52 UTC │ 19 Dec 25 03:52 UTC │
	│ stop    │ -p embed-certs-244717 --alsologtostderr -v=3                                                                                                                                                                                                     │ embed-certs-244717           │ jenkins │ v1.37.0 │ 19 Dec 25 03:52 UTC │ 19 Dec 25 03:53 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-168174 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                               │ default-k8s-diff-port-168174 │ jenkins │ v1.37.0 │ 19 Dec 25 03:52 UTC │ 19 Dec 25 03:52 UTC │
	│ stop    │ -p default-k8s-diff-port-168174 --alsologtostderr -v=3                                                                                                                                                                                           │ default-k8s-diff-port-168174 │ jenkins │ v1.37.0 │ 19 Dec 25 03:52 UTC │ 19 Dec 25 03:54 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-094166 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                │ old-k8s-version-094166       │ jenkins │ v1.37.0 │ 19 Dec 25 03:53 UTC │ 19 Dec 25 03:53 UTC │
	│ start   │ -p old-k8s-version-094166 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0      │ old-k8s-version-094166       │ jenkins │ v1.37.0 │ 19 Dec 25 03:53 UTC │ 19 Dec 25 03:53 UTC │
	│ addons  │ enable dashboard -p no-preload-298059 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                     │ no-preload-298059            │ jenkins │ v1.37.0 │ 19 Dec 25 03:53 UTC │ 19 Dec 25 03:53 UTC │
	│ start   │ -p no-preload-298059 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-298059            │ jenkins │ v1.37.0 │ 19 Dec 25 03:53 UTC │ 19 Dec 25 04:00 UTC │
	│ addons  │ enable dashboard -p embed-certs-244717 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ embed-certs-244717           │ jenkins │ v1.37.0 │ 19 Dec 25 03:53 UTC │ 19 Dec 25 03:53 UTC │
	│ start   │ -p embed-certs-244717 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-244717           │ jenkins │ v1.37.0 │ 19 Dec 25 03:53 UTC │ 19 Dec 25 04:00 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-168174 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                          │ default-k8s-diff-port-168174 │ jenkins │ v1.37.0 │ 19 Dec 25 03:54 UTC │ 19 Dec 25 03:54 UTC │
	│ start   │ -p default-k8s-diff-port-168174 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-168174 │ jenkins │ v1.37.0 │ 19 Dec 25 03:54 UTC │ 19 Dec 25 04:01 UTC │
	│ image   │ old-k8s-version-094166 image list --format=json                                                                                                                                                                                                  │ old-k8s-version-094166       │ jenkins │ v1.37.0 │ 19 Dec 25 04:12 UTC │ 19 Dec 25 04:12 UTC │
	│ pause   │ -p old-k8s-version-094166 --alsologtostderr -v=1                                                                                                                                                                                                 │ old-k8s-version-094166       │ jenkins │ v1.37.0 │ 19 Dec 25 04:12 UTC │ 19 Dec 25 04:12 UTC │
	│ unpause │ -p old-k8s-version-094166 --alsologtostderr -v=1                                                                                                                                                                                                 │ old-k8s-version-094166       │ jenkins │ v1.37.0 │ 19 Dec 25 04:12 UTC │ 19 Dec 25 04:12 UTC │
	│ delete  │ -p old-k8s-version-094166                                                                                                                                                                                                                        │ old-k8s-version-094166       │ jenkins │ v1.37.0 │ 19 Dec 25 04:12 UTC │ 19 Dec 25 04:12 UTC │
	│ delete  │ -p old-k8s-version-094166                                                                                                                                                                                                                        │ old-k8s-version-094166       │ jenkins │ v1.37.0 │ 19 Dec 25 04:12 UTC │ 19 Dec 25 04:12 UTC │
	│ start   │ -p newest-cni-509532 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-509532            │ jenkins │ v1.37.0 │ 19 Dec 25 04:12 UTC │ 19 Dec 25 04:12 UTC │
	│ addons  │ enable metrics-server -p newest-cni-509532 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                          │ newest-cni-509532            │ jenkins │ v1.37.0 │ 19 Dec 25 04:12 UTC │ 19 Dec 25 04:12 UTC │
	│ stop    │ -p newest-cni-509532 --alsologtostderr -v=3                                                                                                                                                                                                      │ newest-cni-509532            │ jenkins │ v1.37.0 │ 19 Dec 25 04:12 UTC │ 19 Dec 25 04:14 UTC │
	│ addons  │ enable dashboard -p newest-cni-509532 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                     │ newest-cni-509532            │ jenkins │ v1.37.0 │ 19 Dec 25 04:14 UTC │ 19 Dec 25 04:14 UTC │
	│ start   │ -p newest-cni-509532 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-509532            │ jenkins │ v1.37.0 │ 19 Dec 25 04:14 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────────
──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 04:14:06
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 04:14:06.361038   61066 out.go:360] Setting OutFile to fd 1 ...
	I1219 04:14:06.361124   61066 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 04:14:06.361131   61066 out.go:374] Setting ErrFile to fd 2...
	I1219 04:14:06.361135   61066 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 04:14:06.361336   61066 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5010/.minikube/bin
	I1219 04:14:06.361764   61066 out.go:368] Setting JSON to false
	I1219 04:14:06.362626   61066 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6990,"bootTime":1766110656,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 04:14:06.362675   61066 start.go:143] virtualization: kvm guest
	I1219 04:14:06.364211   61066 out.go:179] * [newest-cni-509532] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 04:14:06.365123   61066 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 04:14:06.365107   61066 notify.go:221] Checking for updates...
	I1219 04:14:06.366901   61066 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 04:14:06.367890   61066 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-5010/kubeconfig
	I1219 04:14:06.368902   61066 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5010/.minikube
	I1219 04:14:06.369808   61066 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 04:14:06.370728   61066 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 04:14:06.372060   61066 config.go:182] Loaded profile config "newest-cni-509532": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1219 04:14:06.372737   61066 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 04:14:06.412127   61066 out.go:179] * Using the kvm2 driver based on existing profile
	I1219 04:14:06.413168   61066 start.go:309] selected driver: kvm2
	I1219 04:14:06.413184   61066 start.go:928] validating driver "kvm2" against &{Name:newest-cni-509532 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-509532 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.70 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 04:14:06.413290   61066 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 04:14:06.414194   61066 start_flags.go:1012] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1219 04:14:06.414228   61066 cni.go:84] Creating CNI manager for ""
	I1219 04:14:06.414281   61066 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1219 04:14:06.414315   61066 start.go:353] cluster config:
	{Name:newest-cni-509532 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-509532 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.70 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 04:14:06.414395   61066 iso.go:125] acquiring lock: {Name:mk42290af04a74a4dddf27fa33aac85c9bdccfe0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 04:14:06.415469   61066 out.go:179] * Starting "newest-cni-509532" primary control-plane node in "newest-cni-509532" cluster
	I1219 04:14:06.416441   61066 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1219 04:14:06.416466   61066 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-5010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1219 04:14:06.416473   61066 cache.go:65] Caching tarball of preloaded images
	I1219 04:14:06.416548   61066 preload.go:238] Found /home/jenkins/minikube-integration/22230-5010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1219 04:14:06.416559   61066 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1219 04:14:06.416671   61066 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/newest-cni-509532/config.json ...
	I1219 04:14:06.416906   61066 start.go:360] acquireMachinesLock for newest-cni-509532: {Name:mk229398d29442d4d52885bbac963e5004bfbfba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1219 04:14:06.416959   61066 start.go:364] duration metric: took 28.485µs to acquireMachinesLock for "newest-cni-509532"
	I1219 04:14:06.416976   61066 start.go:96] Skipping create...Using existing machine configuration
	I1219 04:14:06.416986   61066 fix.go:54] fixHost starting: 
	I1219 04:14:06.418488   61066 fix.go:112] recreateIfNeeded on newest-cni-509532: state=Stopped err=<nil>
	W1219 04:14:06.418507   61066 fix.go:138] unexpected machine state, will restart: <nil>
	I1219 04:14:06.419900   61066 out.go:252] * Restarting existing kvm2 VM for "newest-cni-509532" ...
	I1219 04:14:06.419939   61066 main.go:144] libmachine: starting domain...
	I1219 04:14:06.419951   61066 main.go:144] libmachine: ensuring networks are active...
	I1219 04:14:06.420639   61066 main.go:144] libmachine: Ensuring network default is active
	I1219 04:14:06.421075   61066 main.go:144] libmachine: Ensuring network mk-newest-cni-509532 is active
	I1219 04:14:06.421699   61066 main.go:144] libmachine: getting domain XML...
	I1219 04:14:06.423077   61066 main.go:144] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>newest-cni-509532</name>
	  <uuid>3bcc174c-f6d6-4825-be3a-2b994ab26c4e</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22230-5010/.minikube/machines/newest-cni-509532/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22230-5010/.minikube/machines/newest-cni-509532/newest-cni-509532.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:a0:99:c3'/>
	      <source network='mk-newest-cni-509532'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:78:17:8e'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1219 04:14:07.744358   61066 main.go:144] libmachine: waiting for domain to start...
	I1219 04:14:07.745789   61066 main.go:144] libmachine: domain is now running
	I1219 04:14:07.745805   61066 main.go:144] libmachine: waiting for IP...
	I1219 04:14:07.746502   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:07.747078   61066 main.go:144] libmachine: domain newest-cni-509532 has current primary IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:07.747096   61066 main.go:144] libmachine: found domain IP: 192.168.61.70
	I1219 04:14:07.747104   61066 main.go:144] libmachine: reserving static IP address...
	I1219 04:14:07.747490   61066 main.go:144] libmachine: found host DHCP lease matching {name: "newest-cni-509532", mac: "52:54:00:a0:99:c3", ip: "192.168.61.70"} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:12:20 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:07.747516   61066 main.go:144] libmachine: skip adding static IP to network mk-newest-cni-509532 - found existing host DHCP lease matching {name: "newest-cni-509532", mac: "52:54:00:a0:99:c3", ip: "192.168.61.70"}
	I1219 04:14:07.747523   61066 main.go:144] libmachine: reserved static IP address 192.168.61.70 for domain newest-cni-509532
	I1219 04:14:07.747527   61066 main.go:144] libmachine: waiting for SSH...
	I1219 04:14:07.747532   61066 main.go:144] libmachine: Getting to WaitForSSH function...
	I1219 04:14:07.749941   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:07.750247   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:12:20 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:07.750277   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:07.750441   61066 main.go:144] libmachine: Using SSH client type: native
	I1219 04:14:07.750665   61066 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.61.70 22 <nil> <nil>}
	I1219 04:14:07.750676   61066 main.go:144] libmachine: About to run SSH command:
	exit 0
	I1219 04:14:10.861828   61066 main.go:144] libmachine: Error dialing TCP: dial tcp 192.168.61.70:22: connect: no route to host
	I1219 04:14:16.942890   61066 main.go:144] libmachine: Error dialing TCP: dial tcp 192.168.61.70:22: connect: no route to host
	I1219 04:14:20.046083   61066 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 04:14:20.050093   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:20.050503   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:14:17 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:20.050526   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:20.050762   61066 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/newest-cni-509532/config.json ...
	I1219 04:14:20.050940   61066 machine.go:94] provisionDockerMachine start ...
	I1219 04:14:20.053514   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:20.054009   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:14:17 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:20.054062   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:20.054281   61066 main.go:144] libmachine: Using SSH client type: native
	I1219 04:14:20.054610   61066 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.61.70 22 <nil> <nil>}
	I1219 04:14:20.054627   61066 main.go:144] libmachine: About to run SSH command:
	hostname
	I1219 04:14:20.159350   61066 main.go:144] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1219 04:14:20.159379   61066 buildroot.go:166] provisioning hostname "newest-cni-509532"
	I1219 04:14:20.162396   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:20.162960   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:14:17 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:20.163001   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:20.163165   61066 main.go:144] libmachine: Using SSH client type: native
	I1219 04:14:20.163399   61066 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.61.70 22 <nil> <nil>}
	I1219 04:14:20.163419   61066 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-509532 && echo "newest-cni-509532" | sudo tee /etc/hostname
	I1219 04:14:20.284167   61066 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-509532
	
	I1219 04:14:20.287544   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:20.287971   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:14:17 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:20.287994   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:20.288136   61066 main.go:144] libmachine: Using SSH client type: native
	I1219 04:14:20.288322   61066 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.61.70 22 <nil> <nil>}
	I1219 04:14:20.288338   61066 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-509532' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-509532/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-509532' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1219 04:14:20.401704   61066 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 04:14:20.401728   61066 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22230-5010/.minikube CaCertPath:/home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22230-5010/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22230-5010/.minikube}
	I1219 04:14:20.401744   61066 buildroot.go:174] setting up certificates
	I1219 04:14:20.401752   61066 provision.go:84] configureAuth start
	I1219 04:14:20.404963   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:20.405393   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:14:17 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:20.405415   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:20.407804   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:20.408151   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:14:17 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:20.408185   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:20.408357   61066 provision.go:143] copyHostCerts
	I1219 04:14:20.408419   61066 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-5010/.minikube/ca.pem, removing ...
	I1219 04:14:20.408444   61066 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-5010/.minikube/ca.pem
	I1219 04:14:20.408538   61066 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22230-5010/.minikube/ca.pem (1082 bytes)
	I1219 04:14:20.408706   61066 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-5010/.minikube/cert.pem, removing ...
	I1219 04:14:20.408721   61066 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-5010/.minikube/cert.pem
	I1219 04:14:20.408775   61066 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22230-5010/.minikube/cert.pem (1123 bytes)
	I1219 04:14:20.408860   61066 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-5010/.minikube/key.pem, removing ...
	I1219 04:14:20.408877   61066 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-5010/.minikube/key.pem
	I1219 04:14:20.408925   61066 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22230-5010/.minikube/key.pem (1679 bytes)
	I1219 04:14:20.409014   61066 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22230-5010/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca-key.pem org=jenkins.newest-cni-509532 san=[127.0.0.1 192.168.61.70 localhost minikube newest-cni-509532]
	I1219 04:14:20.479369   61066 provision.go:177] copyRemoteCerts
	I1219 04:14:20.479428   61066 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1219 04:14:20.481882   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:20.482182   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:14:17 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:20.482203   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:20.482321   61066 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/newest-cni-509532/id_rsa Username:docker}
	I1219 04:14:20.566454   61066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1219 04:14:20.595753   61066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1219 04:14:20.622921   61066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1219 04:14:20.656657   61066 provision.go:87] duration metric: took 254.891587ms to configureAuth
	I1219 04:14:20.656688   61066 buildroot.go:189] setting minikube options for container-runtime
	I1219 04:14:20.656898   61066 config.go:182] Loaded profile config "newest-cni-509532": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1219 04:14:20.659654   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:20.660055   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:14:17 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:20.660074   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:20.660268   61066 main.go:144] libmachine: Using SSH client type: native
	I1219 04:14:20.660466   61066 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.61.70 22 <nil> <nil>}
	I1219 04:14:20.660480   61066 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1219 04:14:20.908219   61066 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1219 04:14:20.908261   61066 machine.go:97] duration metric: took 857.295481ms to provisionDockerMachine
	I1219 04:14:20.908277   61066 start.go:293] postStartSetup for "newest-cni-509532" (driver="kvm2")
	I1219 04:14:20.908289   61066 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1219 04:14:20.908347   61066 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1219 04:14:20.911558   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:20.912049   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:14:17 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:20.912081   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:20.912214   61066 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/newest-cni-509532/id_rsa Username:docker}
	I1219 04:14:20.995002   61066 ssh_runner.go:195] Run: cat /etc/os-release
	I1219 04:14:20.999993   61066 info.go:137] Remote host: Buildroot 2025.02
	I1219 04:14:21.000015   61066 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-5010/.minikube/addons for local assets ...
	I1219 04:14:21.000093   61066 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-5010/.minikube/files for local assets ...
	I1219 04:14:21.000225   61066 filesync.go:149] local asset: /home/jenkins/minikube-integration/22230-5010/.minikube/files/etc/ssl/certs/89372.pem -> 89372.pem in /etc/ssl/certs
	I1219 04:14:21.000345   61066 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1219 04:14:21.011859   61066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/files/etc/ssl/certs/89372.pem --> /etc/ssl/certs/89372.pem (1708 bytes)
	I1219 04:14:21.044486   61066 start.go:296] duration metric: took 136.195131ms for postStartSetup
	I1219 04:14:21.044529   61066 fix.go:56] duration metric: took 14.62754292s for fixHost
	I1219 04:14:21.047285   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:21.047669   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:14:17 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:21.047697   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:21.047883   61066 main.go:144] libmachine: Using SSH client type: native
	I1219 04:14:21.048095   61066 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.61.70 22 <nil> <nil>}
	I1219 04:14:21.048112   61066 main.go:144] libmachine: About to run SSH command:
	date +%s.%N
	I1219 04:14:21.154669   61066 main.go:144] libmachine: SSH cmd err, output: <nil>: 1766117661.109413285
	
	I1219 04:14:21.154689   61066 fix.go:216] guest clock: 1766117661.109413285
	I1219 04:14:21.154697   61066 fix.go:229] Guest: 2025-12-19 04:14:21.109413285 +0000 UTC Remote: 2025-12-19 04:14:21.04453285 +0000 UTC m=+14.732482606 (delta=64.880435ms)
	I1219 04:14:21.154716   61066 fix.go:200] guest clock delta is within tolerance: 64.880435ms
	I1219 04:14:21.154729   61066 start.go:83] releasing machines lock for "newest-cni-509532", held for 14.737760627s
	I1219 04:14:21.157999   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:21.158406   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:14:17 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:21.158446   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:21.159177   61066 ssh_runner.go:195] Run: cat /version.json
	I1219 04:14:21.159277   61066 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1219 04:14:21.162317   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:21.162712   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:21.162824   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:14:17 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:21.162859   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:21.163054   61066 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/newest-cni-509532/id_rsa Username:docker}
	I1219 04:14:21.163287   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:14:17 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:21.163320   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:21.163501   61066 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/newest-cni-509532/id_rsa Username:docker}
	I1219 04:14:21.240694   61066 ssh_runner.go:195] Run: systemctl --version
	I1219 04:14:21.273827   61066 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1219 04:14:21.422462   61066 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1219 04:14:21.428763   61066 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1219 04:14:21.428831   61066 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1219 04:14:21.447713   61066 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1219 04:14:21.447735   61066 start.go:496] detecting cgroup driver to use...
	I1219 04:14:21.447788   61066 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1219 04:14:21.467586   61066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1219 04:14:21.484309   61066 docker.go:218] disabling cri-docker service (if available) ...
	I1219 04:14:21.484377   61066 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1219 04:14:21.500884   61066 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1219 04:14:21.516934   61066 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1219 04:14:21.663592   61066 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1219 04:14:21.886439   61066 docker.go:234] disabling docker service ...
	I1219 04:14:21.886499   61066 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1219 04:14:21.902373   61066 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1219 04:14:21.916305   61066 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1219 04:14:22.098945   61066 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1219 04:14:22.243790   61066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1219 04:14:22.258649   61066 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 04:14:22.280345   61066 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1219 04:14:22.280436   61066 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 04:14:22.293096   61066 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1219 04:14:22.293154   61066 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 04:14:22.304967   61066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 04:14:22.317195   61066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 04:14:22.329451   61066 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1219 04:14:22.342541   61066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 04:14:22.354632   61066 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 04:14:22.376253   61066 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 04:14:22.388591   61066 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1219 04:14:22.399129   61066 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1219 04:14:22.399179   61066 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1219 04:14:22.418823   61066 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1219 04:14:22.431040   61066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 04:14:22.578462   61066 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1219 04:14:22.691413   61066 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1219 04:14:22.691504   61066 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1219 04:14:22.696926   61066 start.go:564] Will wait 60s for crictl version
	I1219 04:14:22.696992   61066 ssh_runner.go:195] Run: which crictl
	I1219 04:14:22.700936   61066 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1219 04:14:22.737311   61066 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1219 04:14:22.737400   61066 ssh_runner.go:195] Run: crio --version
	I1219 04:14:22.764722   61066 ssh_runner.go:195] Run: crio --version
	I1219 04:14:22.794640   61066 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.29.1 ...
	I1219 04:14:22.798427   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:22.798864   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:14:17 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:22.798888   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:22.799088   61066 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1219 04:14:22.803142   61066 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 04:14:22.819541   61066 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1219 04:14:22.820459   61066 kubeadm.go:884] updating cluster {Name:newest-cni-509532 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-509532 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.70 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1219 04:14:22.820600   61066 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1219 04:14:22.820648   61066 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 04:14:22.852144   61066 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-rc.1". assuming images are not preloaded.
	I1219 04:14:22.852235   61066 ssh_runner.go:195] Run: which lz4
	I1219 04:14:22.856631   61066 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1219 04:14:22.861114   61066 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1219 04:14:22.861147   61066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340598599 bytes)
	I1219 04:14:24.100811   61066 crio.go:462] duration metric: took 1.24424385s to copy over tarball
	I1219 04:14:24.100887   61066 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1219 04:14:25.642217   61066 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.541263209s)
	I1219 04:14:25.642254   61066 crio.go:469] duration metric: took 1.541416336s to extract the tarball
	I1219 04:14:25.642264   61066 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1219 04:14:25.680384   61066 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 04:14:25.722028   61066 crio.go:514] all images are preloaded for cri-o runtime.
	I1219 04:14:25.722055   61066 cache_images.go:86] Images are preloaded, skipping loading
	I1219 04:14:25.722063   61066 kubeadm.go:935] updating node { 192.168.61.70 8443 v1.35.0-rc.1 crio true true} ...
	I1219 04:14:25.722183   61066 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-509532 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.70
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-509532 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1219 04:14:25.722277   61066 ssh_runner.go:195] Run: crio config
	I1219 04:14:25.769708   61066 cni.go:84] Creating CNI manager for ""
	I1219 04:14:25.769737   61066 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1219 04:14:25.769764   61066 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1219 04:14:25.769793   61066 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.61.70 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-509532 NodeName:newest-cni-509532 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.70"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.70 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1219 04:14:25.769971   61066 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.70
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-509532"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.70"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.70"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1219 04:14:25.770093   61066 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1219 04:14:25.783203   61066 binaries.go:51] Found k8s binaries, skipping transfer
	I1219 04:14:25.783264   61066 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1219 04:14:25.794507   61066 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1219 04:14:25.813874   61066 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1219 04:14:25.832656   61066 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1219 04:14:25.851473   61066 ssh_runner.go:195] Run: grep 192.168.61.70	control-plane.minikube.internal$ /etc/hosts
	I1219 04:14:25.855283   61066 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.70	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 04:14:25.868794   61066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 04:14:26.012641   61066 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 04:14:26.033299   61066 certs.go:69] Setting up /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/newest-cni-509532 for IP: 192.168.61.70
	I1219 04:14:26.033319   61066 certs.go:195] generating shared ca certs ...
	I1219 04:14:26.033332   61066 certs.go:227] acquiring lock for ca certs: {Name:mk433b742ffd55233764aebe3c6f298e0d1f02ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 04:14:26.033472   61066 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22230-5010/.minikube/ca.key
	I1219 04:14:26.033510   61066 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22230-5010/.minikube/proxy-client-ca.key
	I1219 04:14:26.033526   61066 certs.go:257] generating profile certs ...
	I1219 04:14:26.033628   61066 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/newest-cni-509532/client.key
	I1219 04:14:26.033688   61066 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/newest-cni-509532/apiserver.key.91f2c6a6
	I1219 04:14:26.033722   61066 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/newest-cni-509532/proxy-client.key
	I1219 04:14:26.033831   61066 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/8937.pem (1338 bytes)
	W1219 04:14:26.033863   61066 certs.go:480] ignoring /home/jenkins/minikube-integration/22230-5010/.minikube/certs/8937_empty.pem, impossibly tiny 0 bytes
	I1219 04:14:26.033872   61066 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca-key.pem (1675 bytes)
	I1219 04:14:26.033902   61066 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca.pem (1082 bytes)
	I1219 04:14:26.033928   61066 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/cert.pem (1123 bytes)
	I1219 04:14:26.033950   61066 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/key.pem (1679 bytes)
	I1219 04:14:26.033991   61066 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/files/etc/ssl/certs/89372.pem (1708 bytes)
	I1219 04:14:26.034602   61066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1219 04:14:26.074451   61066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1219 04:14:26.106740   61066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1219 04:14:26.134229   61066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1219 04:14:26.161855   61066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/newest-cni-509532/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1219 04:14:26.191298   61066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/newest-cni-509532/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1219 04:14:26.220617   61066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/newest-cni-509532/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1219 04:14:26.248595   61066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/newest-cni-509532/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1219 04:14:26.277651   61066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/certs/8937.pem --> /usr/share/ca-certificates/8937.pem (1338 bytes)
	I1219 04:14:26.304192   61066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/files/etc/ssl/certs/89372.pem --> /usr/share/ca-certificates/89372.pem (1708 bytes)
	I1219 04:14:26.331489   61066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1219 04:14:26.359526   61066 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1219 04:14:26.381074   61066 ssh_runner.go:195] Run: openssl version
	I1219 04:14:26.387536   61066 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1219 04:14:26.398290   61066 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1219 04:14:26.409385   61066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1219 04:14:26.414244   61066 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 19 02:26 /usr/share/ca-certificates/minikubeCA.pem
	I1219 04:14:26.414281   61066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1219 04:14:26.421272   61066 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1219 04:14:26.431473   61066 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1219 04:14:26.441908   61066 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8937.pem
	I1219 04:14:26.453021   61066 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8937.pem /etc/ssl/certs/8937.pem
	I1219 04:14:26.464301   61066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8937.pem
	I1219 04:14:26.469137   61066 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 19 02:46 /usr/share/ca-certificates/8937.pem
	I1219 04:14:26.469186   61066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8937.pem
	I1219 04:14:26.475991   61066 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1219 04:14:26.486849   61066 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8937.pem /etc/ssl/certs/51391683.0
	I1219 04:14:26.497751   61066 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/89372.pem
	I1219 04:14:26.509027   61066 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/89372.pem /etc/ssl/certs/89372.pem
	I1219 04:14:26.520212   61066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/89372.pem
	I1219 04:14:26.525194   61066 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 19 02:46 /usr/share/ca-certificates/89372.pem
	I1219 04:14:26.525249   61066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/89372.pem
	I1219 04:14:26.532003   61066 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1219 04:14:26.542354   61066 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/89372.pem /etc/ssl/certs/3ec20f2e.0
	I1219 04:14:26.554029   61066 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1219 04:14:26.558993   61066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1219 04:14:26.566192   61066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1219 04:14:26.572977   61066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1219 04:14:26.580715   61066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1219 04:14:26.587688   61066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1219 04:14:26.594505   61066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1219 04:14:26.601393   61066 kubeadm.go:401] StartCluster: {Name:newest-cni-509532 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-509532 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.70 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 04:14:26.601491   61066 cri.go:57] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1219 04:14:26.601531   61066 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 04:14:26.634723   61066 cri.go:92] found id: ""
	I1219 04:14:26.634795   61066 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1219 04:14:26.646970   61066 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1219 04:14:26.646989   61066 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1219 04:14:26.647032   61066 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1219 04:14:26.657908   61066 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1219 04:14:26.659041   61066 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-509532" does not appear in /home/jenkins/minikube-integration/22230-5010/kubeconfig
	I1219 04:14:26.659677   61066 kubeconfig.go:62] /home/jenkins/minikube-integration/22230-5010/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-509532" cluster setting kubeconfig missing "newest-cni-509532" context setting]
	I1219 04:14:26.660520   61066 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5010/kubeconfig: {Name:mk464ac013e036429e360ed78fc04687ed75f83a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 04:14:26.662741   61066 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1219 04:14:26.673645   61066 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.61.70
	I1219 04:14:26.673667   61066 kubeadm.go:1161] stopping kube-system containers ...
	I1219 04:14:26.673679   61066 cri.go:57] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1219 04:14:26.673730   61066 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 04:14:26.708336   61066 cri.go:92] found id: ""
	I1219 04:14:26.708403   61066 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1219 04:14:26.735368   61066 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1219 04:14:26.746710   61066 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1219 04:14:26.746732   61066 kubeadm.go:158] found existing configuration files:
	
	I1219 04:14:26.746773   61066 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1219 04:14:26.756763   61066 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1219 04:14:26.756825   61066 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1219 04:14:26.767551   61066 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1219 04:14:26.777603   61066 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1219 04:14:26.777657   61066 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1219 04:14:26.789616   61066 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1219 04:14:26.799989   61066 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1219 04:14:26.800043   61066 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1219 04:14:26.811043   61066 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1219 04:14:26.821685   61066 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1219 04:14:26.821747   61066 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1219 04:14:26.832490   61066 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1219 04:14:26.842910   61066 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 04:14:26.899704   61066 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 04:14:27.435741   61066 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1219 04:14:27.687135   61066 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 04:14:27.761434   61066 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1219 04:14:27.848539   61066 api_server.go:52] waiting for apiserver process to appear ...
	I1219 04:14:27.848670   61066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 04:14:28.348883   61066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 04:14:28.848774   61066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 04:14:29.349337   61066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 04:14:29.848837   61066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 04:14:30.349757   61066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 04:14:30.384559   61066 api_server.go:72] duration metric: took 2.536027777s to wait for apiserver process to appear ...
	I1219 04:14:30.384596   61066 api_server.go:88] waiting for apiserver healthz status ...
	I1219 04:14:30.384624   61066 api_server.go:253] Checking apiserver healthz at https://192.168.61.70:8443/healthz ...
	I1219 04:14:32.236209   61066 api_server.go:279] https://192.168.61.70:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1219 04:14:32.236242   61066 api_server.go:103] status: https://192.168.61.70:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1219 04:14:32.236259   61066 api_server.go:253] Checking apiserver healthz at https://192.168.61.70:8443/healthz ...
	I1219 04:14:32.301458   61066 api_server.go:279] https://192.168.61.70:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1219 04:14:32.301485   61066 api_server.go:103] status: https://192.168.61.70:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1219 04:14:32.384678   61066 api_server.go:253] Checking apiserver healthz at https://192.168.61.70:8443/healthz ...
	I1219 04:14:32.390384   61066 api_server.go:279] https://192.168.61.70:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[-]poststarthook/start-kube-apiserver-identity-lease-controller failed: reason withheld
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 04:14:32.390423   61066 api_server.go:103] status: https://192.168.61.70:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[-]poststarthook/start-kube-apiserver-identity-lease-controller failed: reason withheld
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 04:14:32.884690   61066 api_server.go:253] Checking apiserver healthz at https://192.168.61.70:8443/healthz ...
	I1219 04:14:32.894173   61066 api_server.go:279] https://192.168.61.70:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 04:14:32.894197   61066 api_server.go:103] status: https://192.168.61.70:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 04:14:33.384837   61066 api_server.go:253] Checking apiserver healthz at https://192.168.61.70:8443/healthz ...
	I1219 04:14:33.395731   61066 api_server.go:279] https://192.168.61.70:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 04:14:33.395765   61066 api_server.go:103] status: https://192.168.61.70:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 04:14:33.885453   61066 api_server.go:253] Checking apiserver healthz at https://192.168.61.70:8443/healthz ...
	I1219 04:14:33.890388   61066 api_server.go:279] https://192.168.61.70:8443/healthz returned 200:
	ok
	I1219 04:14:33.898178   61066 api_server.go:141] control plane version: v1.35.0-rc.1
	I1219 04:14:33.898202   61066 api_server.go:131] duration metric: took 3.513597679s to wait for apiserver health ...
	I1219 04:14:33.898212   61066 cni.go:84] Creating CNI manager for ""
	I1219 04:14:33.898219   61066 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1219 04:14:33.899474   61066 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1219 04:14:33.900488   61066 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1219 04:14:33.923233   61066 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1219 04:14:33.972262   61066 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 04:14:33.984776   61066 system_pods.go:59] 8 kube-system pods found
	I1219 04:14:33.984823   61066 system_pods.go:61] "coredns-7d764666f9-wt5mn" [1e1844bc-e4c0-493b-bbdf-017660625fb4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 04:14:33.984834   61066 system_pods.go:61] "etcd-newest-cni-509532" [668ecd06-0928-483a-b393-bae23e1269b9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 04:14:33.984855   61066 system_pods.go:61] "kube-apiserver-newest-cni-509532" [3cc26981-eaaf-4a54-ac65-5e98371efb21] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 04:14:33.984865   61066 system_pods.go:61] "kube-controller-manager-newest-cni-509532" [38fb14a4-787e-490a-9049-21bf6733543b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 04:14:33.984879   61066 system_pods.go:61] "kube-proxy-k5ptq" [b2d52f71-bf33-4869-a7f5-d33183a19cce] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 04:14:33.984892   61066 system_pods.go:61] "kube-scheduler-newest-cni-509532" [53f913da-bb8f-4193-901b-272a4b77217c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 04:14:33.984904   61066 system_pods.go:61] "metrics-server-5d785b57d4-7sqzf" [0af927e7-5a60-42a7-adc5-638b0ac652c5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 04:14:33.984916   61066 system_pods.go:61] "storage-provisioner" [2154f643-f3b5-486f-bfc4-7355248590cd] Running
	I1219 04:14:33.984933   61066 system_pods.go:74] duration metric: took 12.647245ms to wait for pod list to return data ...
	I1219 04:14:33.984945   61066 node_conditions.go:102] verifying NodePressure condition ...
	I1219 04:14:33.993929   61066 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1219 04:14:33.993953   61066 node_conditions.go:123] node cpu capacity is 2
	I1219 04:14:33.993966   61066 node_conditions.go:105] duration metric: took 9.012349ms to run NodePressure ...
	I1219 04:14:33.994028   61066 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 04:14:34.291614   61066 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1219 04:14:34.313097   61066 ops.go:34] apiserver oom_adj: -16
	I1219 04:14:34.313126   61066 kubeadm.go:602] duration metric: took 7.666128862s to restartPrimaryControlPlane
	I1219 04:14:34.313139   61066 kubeadm.go:403] duration metric: took 7.711753039s to StartCluster
	I1219 04:14:34.313159   61066 settings.go:142] acquiring lock: {Name:mkb131693a40b9aa50c302192272518c3561c861 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 04:14:34.313257   61066 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-5010/kubeconfig
	I1219 04:14:34.315826   61066 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5010/kubeconfig: {Name:mk464ac013e036429e360ed78fc04687ed75f83a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 04:14:34.316151   61066 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.61.70 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1219 04:14:34.316217   61066 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1219 04:14:34.316324   61066 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-509532"
	I1219 04:14:34.316354   61066 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-509532"
	I1219 04:14:34.316351   61066 addons.go:70] Setting default-storageclass=true in profile "newest-cni-509532"
	W1219 04:14:34.316364   61066 addons.go:248] addon storage-provisioner should already be in state true
	I1219 04:14:34.316368   61066 addons.go:70] Setting metrics-server=true in profile "newest-cni-509532"
	I1219 04:14:34.316376   61066 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-509532"
	I1219 04:14:34.316382   61066 addons.go:239] Setting addon metrics-server=true in "newest-cni-509532"
	W1219 04:14:34.316391   61066 addons.go:248] addon metrics-server should already be in state true
	I1219 04:14:34.316412   61066 addons.go:70] Setting dashboard=true in profile "newest-cni-509532"
	I1219 04:14:34.316468   61066 addons.go:239] Setting addon dashboard=true in "newest-cni-509532"
	W1219 04:14:34.316477   61066 addons.go:248] addon dashboard should already be in state true
	I1219 04:14:34.316494   61066 host.go:66] Checking if "newest-cni-509532" exists ...
	I1219 04:14:34.316398   61066 host.go:66] Checking if "newest-cni-509532" exists ...
	I1219 04:14:34.316426   61066 host.go:66] Checking if "newest-cni-509532" exists ...
	I1219 04:14:34.316354   61066 config.go:182] Loaded profile config "newest-cni-509532": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1219 04:14:34.318473   61066 out.go:179] * Verifying Kubernetes components...
	I1219 04:14:34.319389   61066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 04:14:34.320092   61066 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 04:14:34.320109   61066 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
	I1219 04:14:34.320758   61066 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 04:14:34.321246   61066 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1219 04:14:34.321284   61066 addons.go:239] Setting addon default-storageclass=true in "newest-cni-509532"
	W1219 04:14:34.321510   61066 addons.go:248] addon default-storageclass should already be in state true
	I1219 04:14:34.321535   61066 host.go:66] Checking if "newest-cni-509532" exists ...
	I1219 04:14:34.321897   61066 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 04:14:34.321913   61066 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 04:14:34.322387   61066 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1219 04:14:34.322403   61066 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1219 04:14:34.323725   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:34.323987   61066 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 04:14:34.324003   61066 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 04:14:34.324827   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:14:17 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:34.324873   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:34.325140   61066 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/newest-cni-509532/id_rsa Username:docker}
	I1219 04:14:34.326238   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:34.326275   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:34.326828   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:14:17 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:34.326860   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:34.326860   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:14:17 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:34.326949   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:34.327058   61066 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/newest-cni-509532/id_rsa Username:docker}
	I1219 04:14:34.327242   61066 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/newest-cni-509532/id_rsa Username:docker}
	I1219 04:14:34.327975   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:34.328324   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:14:17 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:34.328347   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:34.328469   61066 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/newest-cni-509532/id_rsa Username:docker}
	I1219 04:14:34.587142   61066 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 04:14:34.611731   61066 api_server.go:52] waiting for apiserver process to appear ...
	I1219 04:14:34.611822   61066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 04:14:34.634330   61066 api_server.go:72] duration metric: took 318.137827ms to wait for apiserver process to appear ...
	I1219 04:14:34.634361   61066 api_server.go:88] waiting for apiserver healthz status ...
	I1219 04:14:34.634385   61066 api_server.go:253] Checking apiserver healthz at https://192.168.61.70:8443/healthz ...
	I1219 04:14:34.640210   61066 api_server.go:279] https://192.168.61.70:8443/healthz returned 200:
	ok
	I1219 04:14:34.641463   61066 api_server.go:141] control plane version: v1.35.0-rc.1
	I1219 04:14:34.641480   61066 api_server.go:131] duration metric: took 7.111019ms to wait for apiserver health ...
	I1219 04:14:34.641487   61066 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 04:14:34.644743   61066 system_pods.go:59] 8 kube-system pods found
	I1219 04:14:34.644776   61066 system_pods.go:61] "coredns-7d764666f9-wt5mn" [1e1844bc-e4c0-493b-bbdf-017660625fb4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 04:14:34.644789   61066 system_pods.go:61] "etcd-newest-cni-509532" [668ecd06-0928-483a-b393-bae23e1269b9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 04:14:34.644801   61066 system_pods.go:61] "kube-apiserver-newest-cni-509532" [3cc26981-eaaf-4a54-ac65-5e98371efb21] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 04:14:34.644812   61066 system_pods.go:61] "kube-controller-manager-newest-cni-509532" [38fb14a4-787e-490a-9049-21bf6733543b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 04:14:34.644821   61066 system_pods.go:61] "kube-proxy-k5ptq" [b2d52f71-bf33-4869-a7f5-d33183a19cce] Running
	I1219 04:14:34.644837   61066 system_pods.go:61] "kube-scheduler-newest-cni-509532" [53f913da-bb8f-4193-901b-272a4b77217c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 04:14:34.644847   61066 system_pods.go:61] "metrics-server-5d785b57d4-7sqzf" [0af927e7-5a60-42a7-adc5-638b0ac652c5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 04:14:34.644859   61066 system_pods.go:61] "storage-provisioner" [2154f643-f3b5-486f-bfc4-7355248590cd] Running
	I1219 04:14:34.644867   61066 system_pods.go:74] duration metric: took 3.373739ms to wait for pod list to return data ...
	I1219 04:14:34.644878   61066 default_sa.go:34] waiting for default service account to be created ...
	I1219 04:14:34.647226   61066 default_sa.go:45] found service account: "default"
	I1219 04:14:34.647247   61066 default_sa.go:55] duration metric: took 2.35291ms for default service account to be created ...
	I1219 04:14:34.647260   61066 kubeadm.go:587] duration metric: took 331.072692ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1219 04:14:34.647286   61066 node_conditions.go:102] verifying NodePressure condition ...
	I1219 04:14:34.649136   61066 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1219 04:14:34.649158   61066 node_conditions.go:123] node cpu capacity is 2
	I1219 04:14:34.649171   61066 node_conditions.go:105] duration metric: took 1.875766ms to run NodePressure ...
	I1219 04:14:34.649184   61066 start.go:242] waiting for startup goroutines ...
	I1219 04:14:34.684661   61066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 04:14:34.690440   61066 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1219 04:14:34.690464   61066 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1219 04:14:34.703173   61066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 04:14:34.737761   61066 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1219 04:14:34.737791   61066 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1219 04:14:34.790265   61066 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 04:14:34.790287   61066 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1219 04:14:34.852757   61066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 04:14:34.887897   61066 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 04:14:36.013051   61066 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.328350909s)
	I1219 04:14:36.013133   61066 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.309931889s)
	I1219 04:14:36.111178   61066 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.25838056s)
	I1219 04:14:36.111204   61066 ssh_runner.go:235] Completed: test -f /usr/bin/helm: (1.223278971s)
	I1219 04:14:36.111222   61066 addons.go:500] Verifying addon metrics-server=true in "newest-cni-509532"
	I1219 04:14:36.111276   61066 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 04:14:36.114770   61066 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	I1219 04:14:36.989143   61066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	I1219 04:14:40.311413   61066 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (3.322216791s)
	I1219 04:14:40.311501   61066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 04:14:40.700323   61066 addons.go:500] Verifying addon dashboard=true in "newest-cni-509532"
	I1219 04:14:40.703308   61066 out.go:179] * Verifying dashboard addon...
	I1219 04:14:40.705388   61066 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 04:14:40.714051   61066 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 04:14:40.714067   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:41.214289   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:41.709381   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:42.208940   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:42.711074   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:43.209100   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:43.709687   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:44.209381   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:44.709033   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:45.208335   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:45.708776   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:46.209886   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:46.708530   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:47.209371   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:47.708645   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:48.209250   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:48.708911   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:49.208441   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:49.709372   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:50.209545   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:50.708944   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:51.208438   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:51.709022   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:52.208662   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:52.709170   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:53.209170   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:53.709621   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:54.209235   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:54.708902   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:55.208961   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:55.708819   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:56.209635   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:56.709369   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:57.208990   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:57.709114   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:58.209155   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:58.708556   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:59.208920   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:59.709099   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:00.208668   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:00.709308   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:01.208791   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:01.709282   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:02.208969   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:02.709020   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:03.209562   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:03.709818   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:04.209394   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:04.710095   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:05.208341   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:05.708877   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:06.209468   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:06.709021   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:07.208884   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:07.710798   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:08.209944   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:08.709151   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:09.209372   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:09.709439   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:10.210196   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:10.709268   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:11.209953   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:11.708633   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:12.209488   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:12.709557   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:13.209528   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:13.710269   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:14.208719   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:14.709683   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:15.209748   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:15.710466   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:16.209094   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:16.708900   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:17.210178   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:17.709320   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:18.208709   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:18.711788   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:19.209147   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:19.709274   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:20.215927   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:20.709487   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:21.209636   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:21.709453   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:22.209104   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:22.709403   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:23.209951   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:23.709366   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:24.208821   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:24.709494   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:25.209361   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:25.709820   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:26.210263   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:26.708770   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:27.209796   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:27.710441   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:28.210538   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:28.709362   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:29.208745   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:29.713247   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:30.209128   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:30.709079   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:31.209001   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:31.709304   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:32.208985   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:32.708946   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:33.208932   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:33.709461   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:34.211211   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:34.710234   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:35.209227   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:35.709023   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:36.208843   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:36.708561   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:37.209466   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:37.710118   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:38.210715   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:38.709625   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:39.209486   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:39.709309   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:40.209102   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:40.708785   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:41.209503   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:41.709006   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:42.210327   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:42.709654   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:43.209327   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:43.709108   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:44.210491   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:44.709518   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:45.209472   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:45.709105   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:46.209051   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:46.709227   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:47.209758   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:47.709152   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:48.208757   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:48.709591   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:49.208784   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:49.709224   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:50.209656   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:50.709222   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:51.208915   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:51.709281   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:52.209437   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:52.709067   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:53.209388   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:53.709821   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:54.210256   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:54.709004   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:55.210468   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:55.708503   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:56.210298   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:56.708960   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:57.209547   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:57.709509   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:58.209519   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:58.709279   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:59.209362   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:59.708363   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:00.209110   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:00.708846   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:01.209401   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:01.709242   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:02.209610   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:02.708360   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:03.209720   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:03.708485   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:04.208731   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:04.709494   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:05.208815   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:05.708950   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:06.211916   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:06.708827   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:07.209434   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:07.708859   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:08.209971   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:08.709487   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:09.208814   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:09.709339   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:10.209693   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:10.709073   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:11.208882   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:11.709587   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:12.216297   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:12.708620   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:13.209710   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:13.710293   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:14.209030   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:14.709846   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:15.209755   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:15.708775   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:16.209650   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:16.710182   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:17.208561   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:17.709020   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:18.209752   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:18.709934   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:19.208768   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:19.709685   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:20.211473   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:20.708882   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:21.209970   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:21.709072   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:22.209763   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:22.709161   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:23.209199   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:23.709476   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:24.209259   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:24.708905   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:25.210557   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:25.709447   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:26.209744   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:26.709864   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:27.209781   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:27.710207   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:28.209976   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:28.709670   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:29.209701   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:29.709229   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:30.209362   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:30.708762   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:31.209196   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:31.709242   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:32.210131   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:32.708822   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:33.209731   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:33.710255   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:34.209751   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:34.709687   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:35.209508   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:35.709380   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:36.209299   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:36.710415   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:37.208972   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:37.709755   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:38.210386   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:38.708945   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:39.209705   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:39.709625   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:40.209957   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:40.709140   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:41.209723   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:41.709186   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:42.209494   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:42.708817   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:43.208986   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:43.710319   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:44.209078   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:44.708386   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:45.209690   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:45.709034   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:46.208833   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:46.709451   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:47.209201   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:47.709554   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:48.209559   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:48.709724   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:49.209297   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:49.708616   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:50.209756   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:50.708769   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:51.209737   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:51.709288   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:52.210762   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:52.709462   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:53.208546   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:53.708920   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:54.209535   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:54.708776   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:55.209801   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:55.710359   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:56.209200   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:56.708922   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:57.209223   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:57.708773   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:58.210524   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:58.710068   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:59.209268   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:59.708786   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:00.209738   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:00.709423   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:01.208944   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:01.709388   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:02.210192   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:02.708971   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:03.209341   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:03.710005   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:04.210688   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:04.709305   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:05.209187   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:05.710432   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:06.209490   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:06.710363   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:07.208742   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:07.709481   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:08.210292   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:08.708605   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:09.208834   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:09.709251   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:10.208873   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:10.709907   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:11.209307   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:11.709408   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:12.209896   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:12.708621   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:13.209743   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:13.710231   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:14.208736   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:14.709905   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:15.209118   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:15.709174   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:16.209391   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:16.709719   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:17.209628   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:17.709695   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:18.209845   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:18.708781   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:19.209135   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:19.709507   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:20.208841   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:20.710990   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:21.208724   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:21.710158   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:22.209103   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:22.710129   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:23.209775   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:23.709987   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:24.209316   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:24.709881   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:25.208928   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:25.708559   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:26.210028   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:26.710617   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:27.209052   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:27.708805   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:28.208631   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:28.710309   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:29.209622   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:29.709948   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:30.208964   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:30.709728   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:31.209967   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:31.709109   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:32.209521   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:32.709659   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:33.210734   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:33.710628   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:34.209345   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:34.711684   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:35.208987   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:35.708456   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:36.210082   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:36.709231   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:37.209017   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:37.708542   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:38.209781   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:38.709533   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:39.209334   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:39.708705   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:40.209760   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:40.710565   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:41.209403   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:41.709166   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:42.209605   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:42.710227   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:43.209790   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:43.709155   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:44.209710   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:44.709316   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:45.209305   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:45.708751   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:46.210380   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:46.709861   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:47.209176   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:47.710298   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:48.209488   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:48.709793   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:49.209720   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:49.709597   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:50.210321   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:50.710068   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:51.209343   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:51.709456   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:52.209315   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:52.710321   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:53.208905   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:53.708513   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:54.209522   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:54.710225   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:55.208988   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:55.708532   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:56.210100   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:56.709278   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:57.209475   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:57.709488   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:58.208995   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:58.709129   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:59.208642   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:17:59.709554   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:18:00.208833   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:18:00.709059   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:18:01.208813   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
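The kapi.go entries above show minikube polling the pod matching "app.kubernetes.io/name=kubernetes-dashboard-web" roughly every 500ms, and the pod never leaves Pending for the whole window. As a hedged sketch (not part of this run), the usual way to see why a pod is stuck in Pending is to ask the API server for its events; the context name no-preload-298059 below is assumed from the node name in the CRI-O log that follows:

    # list the dashboard-web pod and its current phase
    kubectl --context no-preload-298059 -n kubernetes-dashboard get pods -l app.kubernetes.io/name=kubernetes-dashboard-web
    # describe it to see scheduling / image-pull events that usually explain a Pending phase
    kubectl --context no-preload-298059 -n kubernetes-dashboard describe pods -l app.kubernetes.io/name=kubernetes-dashboard-web
    # namespace events, oldest first, for anything the scheduler or kubelet reported
    kubectl --context no-preload-298059 -n kubernetes-dashboard get events --sort-by=.metadata.creationTimestamp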
	
	
	==> CRI-O <==
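The entries below are CRI-O debug-level gRPC traces (RuntimeService/Version, ImageService/ImageFsInfo, RuntimeService/ListContainers) captured on the no-preload-298059 node while the test was waiting. A hedged sketch of issuing the same queries by hand on the node, assuming crictl is pointed at CRI-O's default socket:

    # runtime name and version, equivalent to the RuntimeService/Version calls in the trace
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
    # image filesystem usage, equivalent to ImageService/ImageFsInfo
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo
    # full container list with state and pod metadata, equivalent to RuntimeService/ListContainers
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a -o json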
	Dec 19 04:18:06 no-preload-298059 crio[891]: time="2025-12-19 04:18:06.319578611Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766117886319551674,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134588,},InodesUsed:&UInt64Value{Value:58,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6ae45c52-abd1-4180-b72d-746bd2f7bbf6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 04:18:06 no-preload-298059 crio[891]: time="2025-12-19 04:18:06.320579168Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=804eacc5-7341-4adb-b2e3-a0cd07fbbef1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:18:06 no-preload-298059 crio[891]: time="2025-12-19 04:18:06.320714319Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=804eacc5-7341-4adb-b2e3-a0cd07fbbef1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:18:06 no-preload-298059 crio[891]: time="2025-12-19 04:18:06.321065006Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8734051d2f0753c6b39259d3a92400c83923737a12e641958274288c74f737af,PodSandboxId:38a19878c79e8ec4ee1fc22c21f80a89dfab724c1fd54f7467b39d39cd6d7bdb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766116464727222106,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a1b3f7-48eb-4e67-a38a-d3bbe618037b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43c5e14a321a32354d05717270a3820978b39b7fe7f272d3e3c60c08337ccfd9,PodSandboxId:060f5ba9ce5e95d49b5a8c5b58923f2516cffe5fac626ab8ffbce5972a9f4769,Metadata:&ContainerMetadata{Name:kubernetes-dashboard-auth,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard-auth@sha256:484c65877a3aa9e7095745576e55310f09f135b2fe5d9694289352fe65986052,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd54374d0ab14ab65dd3d5d975f97d5b4aff7b221db479874a4429225b9b22b1,State:CONTAINER_RUNNING,CreatedAt:1766116459128067038,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard-auth,io.kubernetes.pod.name: kubernetes-dashboard-auth-776b489b7d-9c8dt,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 612deea8-222e-4076-8cf2-abed9ad430c4,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d5e3053e,io.kubernetes.container.ports: [{\"name\":\"auth\",\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e932d1edea4abfa1cf9c9b4526d12d79651379c3515e143486d77e49eeed4013,PodSandboxId:b946b5e8ecf26f58b26959e213b4a54f8741433834484a1fa8df6f88ed4f5487,Metadata:&ContainerMetadata{Name:proxy,Attempt:0,},Image:&ImageSpec{Image:3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480,State:CONTAINER_RUNNING,CreatedAt:1766116455666220781,Labels:map[string]string{io.kubernetes.container.name: proxy,io.kubernetes.pod.name: kubernetes-dashboard-kong-78b7499b45-rf7kh,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid:
4e4979a6-a706-4379-ba0f-c676c9c6a4ff,},Annotations:map[string]string{io.kubernetes.container.hash: 6a12d770,io.kubernetes.container.ports: [{\"name\":\"proxy-tls\",\"containerPort\":8443,\"protocol\":\"TCP\"},{\"name\":\"status\",\"containerPort\":8100,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"kong\",\"quit\",\"--wait=15\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87834bd45e2f55618472578765bdaba0091e09366c3d3856f7be8c53adbfa311,PodSandboxId:b946b5e8ecf26f58b26959e213b4a54f8741433834484a1fa8df6f88ed4f5487,Metadata:&ContainerMetadata{Name:clear-stale-pid,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/kong@sha256:4379444ecfd82794b27de38a74ba540e8571683dfdfce74c8ecb4018f308fb29,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a975970da2f5f3b909de
c92b1a5ddc5e9299baee1442fb1a6986a8a120d5480,State:CONTAINER_EXITED,CreatedAt:1766116454729152813,Labels:map[string]string{io.kubernetes.container.name: clear-stale-pid,io.kubernetes.pod.name: kubernetes-dashboard-kong-78b7499b45-rf7kh,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 4e4979a6-a706-4379-ba0f-c676c9c6a4ff,},Annotations:map[string]string{io.kubernetes.container.hash: be552228,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7279a41bb4eb25f161d2cf59ef5fbe88c781b37a1adb7cd8a4fcacf3d987126,PodSandboxId:5de93babad08b2c4439a2a2fe38807e1166eaab072c41cb06446cb0a4da7629e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHan
dler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1766116441044355015,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9d5b4f64-027d-4358-ad35-d6f5cf456210,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e938ed63b36432780e9bda7e4e07175ff2c09116d5c4725046ebe10668ca727f,PodSandboxId:50e1884314e011cbff1646e7c6c28f6a55fbfa203ce35c71a9567cfb9108bae0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9
122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1766116437677163723,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-s7729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c378beb2-94a6-4f48-ba17-5753dd076754,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43d62270f19610e36a2dd04fae3c125abdfc1dc4e1c9bcbbb0a6a0
1f8b266853,PodSandboxId:9fbf14ecbca67f75a0e65fe75557249fcfb6d01bde3883512cc7cddd6b11c15d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_RUNNING,CreatedAt:1766116433784381128,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mdfxl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fed76d85-6daa-4df3-be09-4e1bbd4df590,},Annotations:map[string]string{io.kubernetes.container.hash: d6e0e1a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:128355a3fc0dfc8009b429e4b715eec612771201a2c8acbd9a1d0b4f3c42f4ec,PodSandboxId:38a19878c
79e8ec4ee1fc22c21f80a89dfab724c1fd54f7467b39d39cd6d7bdb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766116433813622034,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a1b3f7-48eb-4e67-a38a-d3bbe618037b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a34dccc447cd0495f72344c9747492811bad3cac08571f5ac5007ab21829ad24,PodSandboxId:e96045d4e198f9ec8d3436
a8c528095444f91681e7a5883ecbaaef3e20e76484,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_RUNNING,CreatedAt:1766116430388219293,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-298059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2499cee63550b4d080ca800fbd48c085,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Con
tainer{Id:b51c72efa2e6b21453d50ea9f3ceff413d2c418492349ac14070ae143117c31d,PodSandboxId:b2319b12b46c44269d5d1ea6bf004c84cd9af7bde9558476debb783c8fa12b2a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_RUNNING,CreatedAt:1766116430334824312,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-298059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cc0917cc026e406690dfc10d9d69272,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74a2e1b518b3668336e07cdc6313b22edced2b5e71bd2bc8ba608cba56cb22b9,PodSandboxId:725ddd7dbbfad94b7f03aa8c2af02f77f8bfc2f5056cccf1aa1eaf4843012f90,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,State:CONTAINER_RUNNING,CreatedAt:1766116430268168535,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-298059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 004aeae4dee57f603ea153e6cf9b1d25,},Annotations:map[string]string{io.kubernetes.container.hash: 3a762ed7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restart
Count: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b745ee728165c7698ece1a961e1c00b51f262eb8ceb5b937040ef09debc9b4b,PodSandboxId:53091c4b851e66b844ff4b3207e698a142b5777cdd2c82fece770c99f9bbec41,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_RUNNING,CreatedAt:1766116430223814162,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-298059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fa215cbe4eb125782b57f5aacf122c1,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.con
tainer.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=804eacc5-7341-4adb-b2e3-a0cd07fbbef1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:18:06 no-preload-298059 crio[891]: time="2025-12-19 04:18:06.359261549Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1f69719a-f823-47a4-9f03-f7c13bb6d855 name=/runtime.v1.RuntimeService/Version
	Dec 19 04:18:06 no-preload-298059 crio[891]: time="2025-12-19 04:18:06.359552699Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1f69719a-f823-47a4-9f03-f7c13bb6d855 name=/runtime.v1.RuntimeService/Version
	Dec 19 04:18:06 no-preload-298059 crio[891]: time="2025-12-19 04:18:06.361417365Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=73a29519-d56a-4a1c-aa1b-c88dcbd2dbf0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 04:18:06 no-preload-298059 crio[891]: time="2025-12-19 04:18:06.363710457Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766117886363648560,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134588,},InodesUsed:&UInt64Value{Value:58,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=73a29519-d56a-4a1c-aa1b-c88dcbd2dbf0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 04:18:06 no-preload-298059 crio[891]: time="2025-12-19 04:18:06.365350736Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=33ed8940-e5d6-42d0-896f-0b7093c06df4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:18:06 no-preload-298059 crio[891]: time="2025-12-19 04:18:06.365536418Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=33ed8940-e5d6-42d0-896f-0b7093c06df4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:18:06 no-preload-298059 crio[891]: time="2025-12-19 04:18:06.365893891Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8734051d2f0753c6b39259d3a92400c83923737a12e641958274288c74f737af,PodSandboxId:38a19878c79e8ec4ee1fc22c21f80a89dfab724c1fd54f7467b39d39cd6d7bdb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766116464727222106,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a1b3f7-48eb-4e67-a38a-d3bbe618037b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43c5e14a321a32354d05717270a3820978b39b7fe7f272d3e3c60c08337ccfd9,PodSandboxId:060f5ba9ce5e95d49b5a8c5b58923f2516cffe5fac626ab8ffbce5972a9f4769,Metadata:&ContainerMetadata{Name:kubernetes-dashboard-auth,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard-auth@sha256:484c65877a3aa9e7095745576e55310f09f135b2fe5d9694289352fe65986052,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd54374d0ab14ab65dd3d5d975f97d5b4aff7b221db479874a4429225b9b22b1,State:CONTAINER_RUNNING,CreatedAt:1766116459128067038,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard-auth,io.kubernetes.pod.name: kubernetes-dashboard-auth-776b489b7d-9c8dt,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 612deea8-222e-4076-8cf2-abed9ad430c4,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d5e3053e,io.kubernetes.container.ports: [{\"name\":\"auth\",\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e932d1edea4abfa1cf9c9b4526d12d79651379c3515e143486d77e49eeed4013,PodSandboxId:b946b5e8ecf26f58b26959e213b4a54f8741433834484a1fa8df6f88ed4f5487,Metadata:&ContainerMetadata{Name:proxy,Attempt:0,},Image:&ImageSpec{Image:3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480,State:CONTAINER_RUNNING,CreatedAt:1766116455666220781,Labels:map[string]string{io.kubernetes.container.name: proxy,io.kubernetes.pod.name: kubernetes-dashboard-kong-78b7499b45-rf7kh,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid:
4e4979a6-a706-4379-ba0f-c676c9c6a4ff,},Annotations:map[string]string{io.kubernetes.container.hash: 6a12d770,io.kubernetes.container.ports: [{\"name\":\"proxy-tls\",\"containerPort\":8443,\"protocol\":\"TCP\"},{\"name\":\"status\",\"containerPort\":8100,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"kong\",\"quit\",\"--wait=15\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87834bd45e2f55618472578765bdaba0091e09366c3d3856f7be8c53adbfa311,PodSandboxId:b946b5e8ecf26f58b26959e213b4a54f8741433834484a1fa8df6f88ed4f5487,Metadata:&ContainerMetadata{Name:clear-stale-pid,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/kong@sha256:4379444ecfd82794b27de38a74ba540e8571683dfdfce74c8ecb4018f308fb29,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a975970da2f5f3b909de
c92b1a5ddc5e9299baee1442fb1a6986a8a120d5480,State:CONTAINER_EXITED,CreatedAt:1766116454729152813,Labels:map[string]string{io.kubernetes.container.name: clear-stale-pid,io.kubernetes.pod.name: kubernetes-dashboard-kong-78b7499b45-rf7kh,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 4e4979a6-a706-4379-ba0f-c676c9c6a4ff,},Annotations:map[string]string{io.kubernetes.container.hash: be552228,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7279a41bb4eb25f161d2cf59ef5fbe88c781b37a1adb7cd8a4fcacf3d987126,PodSandboxId:5de93babad08b2c4439a2a2fe38807e1166eaab072c41cb06446cb0a4da7629e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHan
dler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1766116441044355015,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9d5b4f64-027d-4358-ad35-d6f5cf456210,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e938ed63b36432780e9bda7e4e07175ff2c09116d5c4725046ebe10668ca727f,PodSandboxId:50e1884314e011cbff1646e7c6c28f6a55fbfa203ce35c71a9567cfb9108bae0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9
122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1766116437677163723,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-s7729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c378beb2-94a6-4f48-ba17-5753dd076754,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43d62270f19610e36a2dd04fae3c125abdfc1dc4e1c9bcbbb0a6a0
1f8b266853,PodSandboxId:9fbf14ecbca67f75a0e65fe75557249fcfb6d01bde3883512cc7cddd6b11c15d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_RUNNING,CreatedAt:1766116433784381128,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mdfxl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fed76d85-6daa-4df3-be09-4e1bbd4df590,},Annotations:map[string]string{io.kubernetes.container.hash: d6e0e1a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:128355a3fc0dfc8009b429e4b715eec612771201a2c8acbd9a1d0b4f3c42f4ec,PodSandboxId:38a19878c
79e8ec4ee1fc22c21f80a89dfab724c1fd54f7467b39d39cd6d7bdb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766116433813622034,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a1b3f7-48eb-4e67-a38a-d3bbe618037b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a34dccc447cd0495f72344c9747492811bad3cac08571f5ac5007ab21829ad24,PodSandboxId:e96045d4e198f9ec8d3436
a8c528095444f91681e7a5883ecbaaef3e20e76484,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_RUNNING,CreatedAt:1766116430388219293,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-298059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2499cee63550b4d080ca800fbd48c085,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Con
tainer{Id:b51c72efa2e6b21453d50ea9f3ceff413d2c418492349ac14070ae143117c31d,PodSandboxId:b2319b12b46c44269d5d1ea6bf004c84cd9af7bde9558476debb783c8fa12b2a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_RUNNING,CreatedAt:1766116430334824312,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-298059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cc0917cc026e406690dfc10d9d69272,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74a2e1b518b3668336e07cdc6313b22edced2b5e71bd2bc8ba608cba56cb22b9,PodSandboxId:725ddd7dbbfad94b7f03aa8c2af02f77f8bfc2f5056cccf1aa1eaf4843012f90,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,State:CONTAINER_RUNNING,CreatedAt:1766116430268168535,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-298059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 004aeae4dee57f603ea153e6cf9b1d25,},Annotations:map[string]string{io.kubernetes.container.hash: 3a762ed7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restart
Count: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b745ee728165c7698ece1a961e1c00b51f262eb8ceb5b937040ef09debc9b4b,PodSandboxId:53091c4b851e66b844ff4b3207e698a142b5777cdd2c82fece770c99f9bbec41,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_RUNNING,CreatedAt:1766116430223814162,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-298059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fa215cbe4eb125782b57f5aacf122c1,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.con
tainer.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=33ed8940-e5d6-42d0-896f-0b7093c06df4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:18:06 no-preload-298059 crio[891]: time="2025-12-19 04:18:06.396605504Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cb9f63cb-ffb9-45ba-95e8-8c102ffc9863 name=/runtime.v1.RuntimeService/Version
	Dec 19 04:18:06 no-preload-298059 crio[891]: time="2025-12-19 04:18:06.396971848Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cb9f63cb-ffb9-45ba-95e8-8c102ffc9863 name=/runtime.v1.RuntimeService/Version
	Dec 19 04:18:06 no-preload-298059 crio[891]: time="2025-12-19 04:18:06.398204059Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=17f22cf9-d663-4103-b302-da8d1405d1c5 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 04:18:06 no-preload-298059 crio[891]: time="2025-12-19 04:18:06.398588321Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766117886398568545,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134588,},InodesUsed:&UInt64Value{Value:58,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=17f22cf9-d663-4103-b302-da8d1405d1c5 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 04:18:06 no-preload-298059 crio[891]: time="2025-12-19 04:18:06.399585434Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7b3fc411-3ca7-4768-970b-1b30e650ec0d name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:18:06 no-preload-298059 crio[891]: time="2025-12-19 04:18:06.399650119Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7b3fc411-3ca7-4768-970b-1b30e650ec0d name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:18:06 no-preload-298059 crio[891]: time="2025-12-19 04:18:06.399951140Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8734051d2f0753c6b39259d3a92400c83923737a12e641958274288c74f737af,PodSandboxId:38a19878c79e8ec4ee1fc22c21f80a89dfab724c1fd54f7467b39d39cd6d7bdb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766116464727222106,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a1b3f7-48eb-4e67-a38a-d3bbe618037b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43c5e14a321a32354d05717270a3820978b39b7fe7f272d3e3c60c08337ccfd9,PodSandboxId:060f5ba9ce5e95d49b5a8c5b58923f2516cffe5fac626ab8ffbce5972a9f4769,Metadata:&ContainerMetadata{Name:kubernetes-dashboard-auth,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard-auth@sha256:484c65877a3aa9e7095745576e55310f09f135b2fe5d9694289352fe65986052,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd54374d0ab14ab65dd3d5d975f97d5b4aff7b221db479874a4429225b9b22b1,State:CONTAINER_RUNNING,CreatedAt:1766116459128067038,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard-auth,io.kubernetes.pod.name: kubernetes-dashboard-auth-776b489b7d-9c8dt,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 612deea8-222e-4076-8cf2-abed9ad430c4,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d5e3053e,io.kubernetes.container.ports: [{\"name\":\"auth\",\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e932d1edea4abfa1cf9c9b4526d12d79651379c3515e143486d77e49eeed4013,PodSandboxId:b946b5e8ecf26f58b26959e213b4a54f8741433834484a1fa8df6f88ed4f5487,Metadata:&ContainerMetadata{Name:proxy,Attempt:0,},Image:&ImageSpec{Image:3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480,State:CONTAINER_RUNNING,CreatedAt:1766116455666220781,Labels:map[string]string{io.kubernetes.container.name: proxy,io.kubernetes.pod.name: kubernetes-dashboard-kong-78b7499b45-rf7kh,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid:
4e4979a6-a706-4379-ba0f-c676c9c6a4ff,},Annotations:map[string]string{io.kubernetes.container.hash: 6a12d770,io.kubernetes.container.ports: [{\"name\":\"proxy-tls\",\"containerPort\":8443,\"protocol\":\"TCP\"},{\"name\":\"status\",\"containerPort\":8100,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"kong\",\"quit\",\"--wait=15\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87834bd45e2f55618472578765bdaba0091e09366c3d3856f7be8c53adbfa311,PodSandboxId:b946b5e8ecf26f58b26959e213b4a54f8741433834484a1fa8df6f88ed4f5487,Metadata:&ContainerMetadata{Name:clear-stale-pid,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/kong@sha256:4379444ecfd82794b27de38a74ba540e8571683dfdfce74c8ecb4018f308fb29,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a975970da2f5f3b909de
c92b1a5ddc5e9299baee1442fb1a6986a8a120d5480,State:CONTAINER_EXITED,CreatedAt:1766116454729152813,Labels:map[string]string{io.kubernetes.container.name: clear-stale-pid,io.kubernetes.pod.name: kubernetes-dashboard-kong-78b7499b45-rf7kh,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 4e4979a6-a706-4379-ba0f-c676c9c6a4ff,},Annotations:map[string]string{io.kubernetes.container.hash: be552228,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7279a41bb4eb25f161d2cf59ef5fbe88c781b37a1adb7cd8a4fcacf3d987126,PodSandboxId:5de93babad08b2c4439a2a2fe38807e1166eaab072c41cb06446cb0a4da7629e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHan
dler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1766116441044355015,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9d5b4f64-027d-4358-ad35-d6f5cf456210,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e938ed63b36432780e9bda7e4e07175ff2c09116d5c4725046ebe10668ca727f,PodSandboxId:50e1884314e011cbff1646e7c6c28f6a55fbfa203ce35c71a9567cfb9108bae0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9
122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1766116437677163723,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-s7729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c378beb2-94a6-4f48-ba17-5753dd076754,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43d62270f19610e36a2dd04fae3c125abdfc1dc4e1c9bcbbb0a6a0
1f8b266853,PodSandboxId:9fbf14ecbca67f75a0e65fe75557249fcfb6d01bde3883512cc7cddd6b11c15d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_RUNNING,CreatedAt:1766116433784381128,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mdfxl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fed76d85-6daa-4df3-be09-4e1bbd4df590,},Annotations:map[string]string{io.kubernetes.container.hash: d6e0e1a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:128355a3fc0dfc8009b429e4b715eec612771201a2c8acbd9a1d0b4f3c42f4ec,PodSandboxId:38a19878c
79e8ec4ee1fc22c21f80a89dfab724c1fd54f7467b39d39cd6d7bdb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766116433813622034,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a1b3f7-48eb-4e67-a38a-d3bbe618037b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a34dccc447cd0495f72344c9747492811bad3cac08571f5ac5007ab21829ad24,PodSandboxId:e96045d4e198f9ec8d3436
a8c528095444f91681e7a5883ecbaaef3e20e76484,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_RUNNING,CreatedAt:1766116430388219293,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-298059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2499cee63550b4d080ca800fbd48c085,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Con
tainer{Id:b51c72efa2e6b21453d50ea9f3ceff413d2c418492349ac14070ae143117c31d,PodSandboxId:b2319b12b46c44269d5d1ea6bf004c84cd9af7bde9558476debb783c8fa12b2a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_RUNNING,CreatedAt:1766116430334824312,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-298059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cc0917cc026e406690dfc10d9d69272,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74a2e1b518b3668336e07cdc6313b22edced2b5e71bd2bc8ba608cba56cb22b9,PodSandboxId:725ddd7dbbfad94b7f03aa8c2af02f77f8bfc2f5056cccf1aa1eaf4843012f90,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,State:CONTAINER_RUNNING,CreatedAt:1766116430268168535,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-298059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 004aeae4dee57f603ea153e6cf9b1d25,},Annotations:map[string]string{io.kubernetes.container.hash: 3a762ed7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restart
Count: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b745ee728165c7698ece1a961e1c00b51f262eb8ceb5b937040ef09debc9b4b,PodSandboxId:53091c4b851e66b844ff4b3207e698a142b5777cdd2c82fece770c99f9bbec41,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_RUNNING,CreatedAt:1766116430223814162,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-298059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fa215cbe4eb125782b57f5aacf122c1,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.con
tainer.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7b3fc411-3ca7-4768-970b-1b30e650ec0d name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:18:06 no-preload-298059 crio[891]: time="2025-12-19 04:18:06.439099713Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c9a9ca75-8c42-4ae9-b5da-bbba3fd6aec2 name=/runtime.v1.RuntimeService/Version
	Dec 19 04:18:06 no-preload-298059 crio[891]: time="2025-12-19 04:18:06.439219244Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c9a9ca75-8c42-4ae9-b5da-bbba3fd6aec2 name=/runtime.v1.RuntimeService/Version
	Dec 19 04:18:06 no-preload-298059 crio[891]: time="2025-12-19 04:18:06.441419169Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ad6b7910-7a39-474d-b901-cc204245bea5 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 04:18:06 no-preload-298059 crio[891]: time="2025-12-19 04:18:06.442584489Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766117886442558417,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134588,},InodesUsed:&UInt64Value{Value:58,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ad6b7910-7a39-474d-b901-cc204245bea5 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 04:18:06 no-preload-298059 crio[891]: time="2025-12-19 04:18:06.443723278Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1ad0c24a-51a5-4c5d-970c-6d2846372724 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:18:06 no-preload-298059 crio[891]: time="2025-12-19 04:18:06.443877013Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1ad0c24a-51a5-4c5d-970c-6d2846372724 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:18:06 no-preload-298059 crio[891]: time="2025-12-19 04:18:06.444744901Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8734051d2f0753c6b39259d3a92400c83923737a12e641958274288c74f737af,PodSandboxId:38a19878c79e8ec4ee1fc22c21f80a89dfab724c1fd54f7467b39d39cd6d7bdb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766116464727222106,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a1b3f7-48eb-4e67-a38a-d3bbe618037b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43c5e14a321a32354d05717270a3820978b39b7fe7f272d3e3c60c08337ccfd9,PodSandboxId:060f5ba9ce5e95d49b5a8c5b58923f2516cffe5fac626ab8ffbce5972a9f4769,Metadata:&ContainerMetadata{Name:kubernetes-dashboard-auth,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard-auth@sha256:484c65877a3aa9e7095745576e55310f09f135b2fe5d9694289352fe65986052,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd54374d0ab14ab65dd3d5d975f97d5b4aff7b221db479874a4429225b9b22b1,State:CONTAINER_RUNNING,CreatedAt:1766116459128067038,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard-auth,io.kubernetes.pod.name: kubernetes-dashboard-auth-776b489b7d-9c8dt,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 612deea8-222e-4076-8cf2-abed9ad430c4,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d5e3053e,io.kubernetes.container.ports: [{\"name\":\"auth\",\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e932d1edea4abfa1cf9c9b4526d12d79651379c3515e143486d77e49eeed4013,PodSandboxId:b946b5e8ecf26f58b26959e213b4a54f8741433834484a1fa8df6f88ed4f5487,Metadata:&ContainerMetadata{Name:proxy,Attempt:0,},Image:&ImageSpec{Image:3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480,State:CONTAINER_RUNNING,CreatedAt:1766116455666220781,Labels:map[string]string{io.kubernetes.container.name: proxy,io.kubernetes.pod.name: kubernetes-dashboard-kong-78b7499b45-rf7kh,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid:
4e4979a6-a706-4379-ba0f-c676c9c6a4ff,},Annotations:map[string]string{io.kubernetes.container.hash: 6a12d770,io.kubernetes.container.ports: [{\"name\":\"proxy-tls\",\"containerPort\":8443,\"protocol\":\"TCP\"},{\"name\":\"status\",\"containerPort\":8100,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"kong\",\"quit\",\"--wait=15\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87834bd45e2f55618472578765bdaba0091e09366c3d3856f7be8c53adbfa311,PodSandboxId:b946b5e8ecf26f58b26959e213b4a54f8741433834484a1fa8df6f88ed4f5487,Metadata:&ContainerMetadata{Name:clear-stale-pid,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/kong@sha256:4379444ecfd82794b27de38a74ba540e8571683dfdfce74c8ecb4018f308fb29,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a975970da2f5f3b909de
c92b1a5ddc5e9299baee1442fb1a6986a8a120d5480,State:CONTAINER_EXITED,CreatedAt:1766116454729152813,Labels:map[string]string{io.kubernetes.container.name: clear-stale-pid,io.kubernetes.pod.name: kubernetes-dashboard-kong-78b7499b45-rf7kh,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 4e4979a6-a706-4379-ba0f-c676c9c6a4ff,},Annotations:map[string]string{io.kubernetes.container.hash: be552228,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7279a41bb4eb25f161d2cf59ef5fbe88c781b37a1adb7cd8a4fcacf3d987126,PodSandboxId:5de93babad08b2c4439a2a2fe38807e1166eaab072c41cb06446cb0a4da7629e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHan
dler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1766116441044355015,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9d5b4f64-027d-4358-ad35-d6f5cf456210,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e938ed63b36432780e9bda7e4e07175ff2c09116d5c4725046ebe10668ca727f,PodSandboxId:50e1884314e011cbff1646e7c6c28f6a55fbfa203ce35c71a9567cfb9108bae0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9
122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1766116437677163723,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-s7729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c378beb2-94a6-4f48-ba17-5753dd076754,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43d62270f19610e36a2dd04fae3c125abdfc1dc4e1c9bcbbb0a6a0
1f8b266853,PodSandboxId:9fbf14ecbca67f75a0e65fe75557249fcfb6d01bde3883512cc7cddd6b11c15d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_RUNNING,CreatedAt:1766116433784381128,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mdfxl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fed76d85-6daa-4df3-be09-4e1bbd4df590,},Annotations:map[string]string{io.kubernetes.container.hash: d6e0e1a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:128355a3fc0dfc8009b429e4b715eec612771201a2c8acbd9a1d0b4f3c42f4ec,PodSandboxId:38a19878c
79e8ec4ee1fc22c21f80a89dfab724c1fd54f7467b39d39cd6d7bdb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766116433813622034,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a1b3f7-48eb-4e67-a38a-d3bbe618037b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a34dccc447cd0495f72344c9747492811bad3cac08571f5ac5007ab21829ad24,PodSandboxId:e96045d4e198f9ec8d3436
a8c528095444f91681e7a5883ecbaaef3e20e76484,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_RUNNING,CreatedAt:1766116430388219293,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-298059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2499cee63550b4d080ca800fbd48c085,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Con
tainer{Id:b51c72efa2e6b21453d50ea9f3ceff413d2c418492349ac14070ae143117c31d,PodSandboxId:b2319b12b46c44269d5d1ea6bf004c84cd9af7bde9558476debb783c8fa12b2a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_RUNNING,CreatedAt:1766116430334824312,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-298059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cc0917cc026e406690dfc10d9d69272,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74a2e1b518b3668336e07cdc6313b22edced2b5e71bd2bc8ba608cba56cb22b9,PodSandboxId:725ddd7dbbfad94b7f03aa8c2af02f77f8bfc2f5056cccf1aa1eaf4843012f90,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,State:CONTAINER_RUNNING,CreatedAt:1766116430268168535,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-298059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 004aeae4dee57f603ea153e6cf9b1d25,},Annotations:map[string]string{io.kubernetes.container.hash: 3a762ed7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restart
Count: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b745ee728165c7698ece1a961e1c00b51f262eb8ceb5b937040ef09debc9b4b,PodSandboxId:53091c4b851e66b844ff4b3207e698a142b5777cdd2c82fece770c99f9bbec41,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_RUNNING,CreatedAt:1766116430223814162,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-298059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fa215cbe4eb125782b57f5aacf122c1,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.con
tainer.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1ad0c24a-51a5-4c5d-970c-6d2846372724 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                           CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	8734051d2f075       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                23 minutes ago      Running             storage-provisioner         3                   38a19878c79e8       storage-provisioner                          kube-system
	43c5e14a321a3       docker.io/kubernetesui/dashboard-auth@sha256:484c65877a3aa9e7095745576e55310f09f135b2fe5d9694289352fe65986052   23 minutes ago      Running             kubernetes-dashboard-auth   0                   060f5ba9ce5e9       kubernetes-dashboard-auth-776b489b7d-9c8dt   kubernetes-dashboard
	e932d1edea4ab       3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480                                                23 minutes ago      Running             proxy                       0                   b946b5e8ecf26       kubernetes-dashboard-kong-78b7499b45-rf7kh   kubernetes-dashboard
	87834bd45e2f5       docker.io/library/kong@sha256:4379444ecfd82794b27de38a74ba540e8571683dfdfce74c8ecb4018f308fb29                  23 minutes ago      Exited              clear-stale-pid             0                   b946b5e8ecf26       kubernetes-dashboard-kong-78b7499b45-rf7kh   kubernetes-dashboard
	f7279a41bb4eb       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e             24 minutes ago      Running             busybox                     1                   5de93babad08b       busybox                                      default
	e938ed63b3643       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                                24 minutes ago      Running             coredns                     1                   50e1884314e01       coredns-7d764666f9-s7729                     kube-system
	128355a3fc0df       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                24 minutes ago      Exited              storage-provisioner         2                   38a19878c79e8       storage-provisioner                          kube-system
	43d62270f1961       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a                                                24 minutes ago      Running             kube-proxy                  1                   9fbf14ecbca67       kube-proxy-mdfxl                             kube-system
	a34dccc447cd0       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc                                                24 minutes ago      Running             kube-scheduler              1                   e96045d4e198f       kube-scheduler-no-preload-298059             kube-system
	b51c72efa2e6b       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                                24 minutes ago      Running             etcd                        1                   b2319b12b46c4       etcd-no-preload-298059                       kube-system
	74a2e1b518b36       58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce                                                24 minutes ago      Running             kube-apiserver              1                   725ddd7dbbfad       kube-apiserver-no-preload-298059             kube-system
	8b745ee728165       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614                                                24 minutes ago      Running             kube-controller-manager     1                   53091c4b851e6       kube-controller-manager-no-preload-298059    kube-system
	
	
	==> coredns [e938ed63b36432780e9bda7e4e07175ff2c09116d5c4725046ebe10668ca727f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:47550 - 54271 "HINFO IN 3726524411623469454.6949907490803346276. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.026035699s
	
	
	==> describe nodes <==
	Name:               no-preload-298059
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-298059
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=no-preload-298059
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T03_51_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 03:50:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-298059
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 04:18:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 04:13:17 +0000   Fri, 19 Dec 2025 03:50:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 04:13:17 +0000   Fri, 19 Dec 2025 03:50:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 04:13:17 +0000   Fri, 19 Dec 2025 03:50:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 04:13:17 +0000   Fri, 19 Dec 2025 03:54:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.137
	  Hostname:    no-preload-298059
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 2d818f99fc714bf3ba2eba438495ffd9
	  System UUID:                2d818f99-fc71-4bf3-ba2e-ba438495ffd9
	  Boot ID:                    25d4d3fe-f38b-40d9-8b85-a42971ad642c
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 coredns-7d764666f9-s7729                                 100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     27m
	  kube-system                 etcd-no-preload-298059                                   100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         27m
	  kube-system                 kube-apiserver-no-preload-298059                         250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-controller-manager-no-preload-298059                200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-mdfxl                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-no-preload-298059                         100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 metrics-server-5d785b57d4-fkthx                          100m (5%)     0 (0%)      200Mi (6%)       0 (0%)         26m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kubernetes-dashboard        kubernetes-dashboard-api-7646d845d9-scngx                100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    24m
	  kubernetes-dashboard        kubernetes-dashboard-auth-776b489b7d-9c8dt               100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    24m
	  kubernetes-dashboard        kubernetes-dashboard-kong-78b7499b45-rf7kh               0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kubernetes-dashboard        kubernetes-dashboard-metrics-scraper-594bbfb84b-gkplq    100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    24m
	  kubernetes-dashboard        kubernetes-dashboard-web-7f7574785f-pnj4g                100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests      Limits
	  --------           --------      ------
	  cpu                1250m (62%)   1 (50%)
	  memory             1170Mi (39%)  1770Mi (59%)
	  ephemeral-storage  0 (0%)        0 (0%)
	  hugepages-2Mi      0 (0%)        0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  27m   node-controller  Node no-preload-298059 event: Registered Node no-preload-298059 in Controller
	  Normal  RegisteredNode  24m   node-controller  Node no-preload-298059 event: Registered Node no-preload-298059 in Controller
	
	
	==> dmesg <==
	[Dec19 03:53] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001397] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.005601] (rpcbind)[120]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.795242] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.107735] kauditd_printk_skb: 88 callbacks suppressed
	[  +4.663916] kauditd_printk_skb: 196 callbacks suppressed
	[Dec19 03:54] kauditd_printk_skb: 275 callbacks suppressed
	[ +11.265172] kauditd_printk_skb: 204 callbacks suppressed
	[  +9.990359] kauditd_printk_skb: 47 callbacks suppressed
	[Dec19 03:55] kauditd_printk_skb: 11 callbacks suppressed
	
	
	==> etcd [b51c72efa2e6b21453d50ea9f3ceff413d2c418492349ac14070ae143117c31d] <==
	{"level":"info","ts":"2025-12-19T04:08:51.208421Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1489,"took":"5.286414ms","hash":1861443370,"current-db-size-bytes":4591616,"current-db-size":"4.6 MB","current-db-size-in-use-bytes":2441216,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2025-12-19T04:08:51.208468Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1861443370,"revision":1489,"compact-revision":1099}
	{"level":"warn","ts":"2025-12-19T04:12:29.509923Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"130.687959ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14779858727952720775 > lease_revoke:<id:4d1c9b34be142327>","response":"size:28"}
	{"level":"info","ts":"2025-12-19T04:12:29.510153Z","caller":"traceutil/trace.go:172","msg":"trace[2063979308] linearizableReadLoop","detail":"{readStateIndex:2246; appliedIndex:2245; }","duration":"131.457943ms","start":"2025-12-19T04:12:29.378654Z","end":"2025-12-19T04:12:29.510112Z","steps":["trace[2063979308] 'read index received'  (duration: 13.303µs)","trace[2063979308] 'applied index is now lower than readState.Index'  (duration: 131.443507ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-19T04:12:29.510242Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"130.334487ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-12-19T04:12:29.510242Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"147.253878ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T04:12:29.510308Z","caller":"traceutil/trace.go:172","msg":"trace[1008162297] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1958; }","duration":"147.331738ms","start":"2025-12-19T04:12:29.362968Z","end":"2025-12-19T04:12:29.510299Z","steps":["trace[1008162297] 'agreement among raft nodes before linearized reading'  (duration: 147.216873ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T04:12:29.510308Z","caller":"traceutil/trace.go:172","msg":"trace[1842567046] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1958; }","duration":"130.39111ms","start":"2025-12-19T04:12:29.379902Z","end":"2025-12-19T04:12:29.510293Z","steps":["trace[1842567046] 'agreement among raft nodes before linearized reading'  (duration: 130.322556ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T04:12:30.287869Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"294.689737ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T04:12:30.287954Z","caller":"traceutil/trace.go:172","msg":"trace[2100543186] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1959; }","duration":"294.783122ms","start":"2025-12-19T04:12:29.993155Z","end":"2025-12-19T04:12:30.287938Z","steps":["trace[2100543186] 'range keys from in-memory index tree'  (duration: 294.596056ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T04:13:51.209014Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1770}
	{"level":"info","ts":"2025-12-19T04:13:51.215151Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1770,"took":"5.557765ms","hash":3913779246,"current-db-size-bytes":4591616,"current-db-size":"4.6 MB","current-db-size-in-use-bytes":2224128,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2025-12-19T04:13:51.215241Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3913779246,"revision":1770,"compact-revision":1489}
	{"level":"warn","ts":"2025-12-19T04:14:29.216867Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":14779858727952721590,"retry-timeout":"500ms"}
	{"level":"info","ts":"2025-12-19T04:14:29.355454Z","caller":"traceutil/trace.go:172","msg":"trace[1437558151] transaction","detail":"{read_only:false; response_revision:2062; number_of_response:1; }","duration":"852.136002ms","start":"2025-12-19T04:14:28.503290Z","end":"2025-12-19T04:14:29.355426Z","steps":["trace[1437558151] 'process raft request'  (duration: 851.920898ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T04:14:29.355635Z","caller":"traceutil/trace.go:172","msg":"trace[1656909449] linearizableReadLoop","detail":"{readStateIndex:2374; appliedIndex:2375; }","duration":"638.631521ms","start":"2025-12-19T04:14:28.716837Z","end":"2025-12-19T04:14:29.355469Z","steps":["trace[1656909449] 'read index received'  (duration: 638.622151ms)","trace[1656909449] 'applied index is now lower than readState.Index'  (duration: 7.896µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-19T04:14:29.355667Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-19T04:14:28.503271Z","time spent":"852.256282ms","remote":"127.0.0.1:53436","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:2061 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-12-19T04:14:29.355866Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"639.110887ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T04:14:29.355898Z","caller":"traceutil/trace.go:172","msg":"trace[2042012394] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2062; }","duration":"639.142857ms","start":"2025-12-19T04:14:28.716747Z","end":"2025-12-19T04:14:29.355890Z","steps":["trace[2042012394] 'agreement among raft nodes before linearized reading'  (duration: 639.094421ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T04:14:29.380993Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"134.107397ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/certificatesigningrequests\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T04:14:29.381040Z","caller":"traceutil/trace.go:172","msg":"trace[356365775] range","detail":"{range_begin:/registry/certificatesigningrequests; range_end:; response_count:0; response_revision:2062; }","duration":"134.160942ms","start":"2025-12-19T04:14:29.246872Z","end":"2025-12-19T04:14:29.381033Z","steps":["trace[356365775] 'agreement among raft nodes before linearized reading'  (duration: 134.019149ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T04:14:29.381344Z","caller":"traceutil/trace.go:172","msg":"trace[1554255404] transaction","detail":"{read_only:false; response_revision:2064; number_of_response:1; }","duration":"473.138644ms","start":"2025-12-19T04:14:28.908198Z","end":"2025-12-19T04:14:29.381336Z","steps":["trace[1554255404] 'process raft request'  (duration: 473.104699ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T04:14:29.381423Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-19T04:14:28.908168Z","time spent":"473.214642ms","remote":"127.0.0.1:53612","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":557,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/no-preload-298059\" mod_revision:2055 > success:<request_put:<key:\"/registry/leases/kube-node-lease/no-preload-298059\" value_size:499 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/no-preload-298059\" > >"}
	{"level":"info","ts":"2025-12-19T04:14:29.381573Z","caller":"traceutil/trace.go:172","msg":"trace[742294707] transaction","detail":"{read_only:false; response_revision:2063; number_of_response:1; }","duration":"795.76033ms","start":"2025-12-19T04:14:28.585804Z","end":"2025-12-19T04:14:29.381564Z","steps":["trace[742294707] 'process raft request'  (duration: 795.440118ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T04:14:29.381644Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-19T04:14:28.585738Z","time spent":"795.868339ms","remote":"127.0.0.1:53612","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":682,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-tbik2djiqlhu5gwkkmxndrkfm4\" mod_revision:2053 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-tbik2djiqlhu5gwkkmxndrkfm4\" value_size:609 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-tbik2djiqlhu5gwkkmxndrkfm4\" > >"}
	
	
	==> kernel <==
	 04:18:06 up 24 min,  0 users,  load average: 0.02, 0.23, 0.28
	Linux no-preload-298059 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Dec 17 12:49:57 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [74a2e1b518b3668336e07cdc6313b22edced2b5e71bd2bc8ba608cba56cb22b9] <==
	E1219 04:13:53.586344       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1219 04:13:53.586356       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1219 04:13:53.586401       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1219 04:13:53.587509       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 04:14:53.586494       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 04:14:53.586602       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1219 04:14:53.586614       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 04:14:53.587860       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 04:14:53.587991       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1219 04:14:53.588005       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 04:16:53.587098       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 04:16:53.587163       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1219 04:16:53.587184       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 04:16:53.588217       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 04:16:53.588275       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1219 04:16:53.588283       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [8b745ee728165c7698ece1a961e1c00b51f262eb8ceb5b937040ef09debc9b4b] <==
	I1219 04:11:57.726216       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:12:27.373562       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:12:27.735398       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:12:57.379414       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:12:57.744283       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:13:27.384274       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:13:27.752175       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:13:57.388605       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:13:57.760197       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:14:27.393476       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:14:27.770134       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:14:57.398867       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:14:57.778221       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:15:27.403317       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:15:27.786543       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:15:57.409429       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:15:57.797271       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:16:27.415620       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:16:27.807069       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:16:57.421329       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:16:57.817361       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:17:27.426391       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:17:27.828810       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:17:57.432427       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:17:57.837990       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [43d62270f19610e36a2dd04fae3c125abdfc1dc4e1c9bcbbb0a6a01f8b266853] <==
	I1219 03:53:54.182308       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 03:53:54.283281       1 shared_informer.go:377] "Caches are synced"
	I1219 03:53:54.283329       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.72.137"]
	E1219 03:53:54.283428       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 03:53:54.374256       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1219 03:53:54.374327       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1219 03:53:54.374350       1 server_linux.go:136] "Using iptables Proxier"
	I1219 03:53:54.393486       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 03:53:54.394107       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1219 03:53:54.394135       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:53:54.406028       1 config.go:200] "Starting service config controller"
	I1219 03:53:54.406555       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 03:53:54.406726       1 config.go:106] "Starting endpoint slice config controller"
	I1219 03:53:54.407288       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 03:53:54.407323       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 03:53:54.407330       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 03:53:54.408429       1 config.go:309] "Starting node config controller"
	I1219 03:53:54.408441       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 03:53:54.408450       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 03:53:54.511565       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1219 03:53:54.513892       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1219 03:53:54.511881       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [a34dccc447cd0495f72344c9747492811bad3cac08571f5ac5007ab21829ad24] <==
	I1219 03:53:50.978728       1 serving.go:386] Generated self-signed cert in-memory
	W1219 03:53:52.478810       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1219 03:53:52.479833       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1219 03:53:52.479852       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1219 03:53:52.479858       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1219 03:53:52.566597       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-rc.1"
	I1219 03:53:52.566650       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:53:52.591525       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1219 03:53:52.592098       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:53:52.594133       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 03:53:52.592118       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1219 03:53:52.627223       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	I1219 03:53:54.194990       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 19 04:17:28 no-preload-298059 kubelet[1788]: E1219 04:17:28.298661    1788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-web\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": ErrImagePull: reading manifest 1.7.0 in docker.io/kubernetesui/dashboard-web: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-web-7f7574785f-pnj4g" podUID="903dae27-c404-4849-a890-b0b9347710fa"
	Dec 19 04:17:28 no-preload-298059 kubelet[1788]: E1219 04:17:28.300394    1788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:0cdefa0492f2868b93f43880ffad4bd8ae8105c02dc5661352a0a40723b05dbc in docker.io/kubernetesui/dashboard-metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-gkplq" podUID="af2a0a73-cbd2-4724-8d28-578fb9abddbe"
	Dec 19 04:17:29 no-preload-298059 kubelet[1788]: E1219 04:17:29.674294    1788 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766117849673966005  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:134588}  inodes_used:{value:58}}"
	Dec 19 04:17:29 no-preload-298059 kubelet[1788]: E1219 04:17:29.674313    1788 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766117849673966005  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:134588}  inodes_used:{value:58}}"
	Dec 19 04:17:30 no-preload-298059 kubelet[1788]: E1219 04:17:30.295993    1788 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/metrics-server-5d785b57d4-fkthx" containerName="metrics-server"
	Dec 19 04:17:30 no-preload-298059 kubelet[1788]: E1219 04:17:30.298385    1788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-5d785b57d4-fkthx" podUID="cd519bcc-8634-4a06-8174-bc1d8114f895"
	Dec 19 04:17:34 no-preload-298059 kubelet[1788]: E1219 04:17:34.299982    1788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-api\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": ErrImagePull: reading manifest 1.14.0 in docker.io/kubernetesui/dashboard-api: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-api-7646d845d9-scngx" podUID="4f806eec-0e2a-4b2c-8ab4-df0bc3208141"
	Dec 19 04:17:37 no-preload-298059 kubelet[1788]: E1219 04:17:37.297031    1788 prober_manager.go:209] "Readiness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-kong-78b7499b45-rf7kh" containerName="proxy"
	Dec 19 04:17:39 no-preload-298059 kubelet[1788]: E1219 04:17:39.676718    1788 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766117859676175966  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:134588}  inodes_used:{value:58}}"
	Dec 19 04:17:39 no-preload-298059 kubelet[1788]: E1219 04:17:39.677019    1788 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766117859676175966  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:134588}  inodes_used:{value:58}}"
	Dec 19 04:17:41 no-preload-298059 kubelet[1788]: E1219 04:17:41.296141    1788 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-gkplq" containerName="kubernetes-dashboard-metrics-scraper"
	Dec 19 04:17:41 no-preload-298059 kubelet[1788]: E1219 04:17:41.299240    1788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:0cdefa0492f2868b93f43880ffad4bd8ae8105c02dc5661352a0a40723b05dbc in docker.io/kubernetesui/dashboard-metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-gkplq" podUID="af2a0a73-cbd2-4724-8d28-578fb9abddbe"
	Dec 19 04:17:44 no-preload-298059 kubelet[1788]: E1219 04:17:44.296120    1788 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/metrics-server-5d785b57d4-fkthx" containerName="metrics-server"
	Dec 19 04:17:44 no-preload-298059 kubelet[1788]: E1219 04:17:44.298678    1788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-5d785b57d4-fkthx" podUID="cd519bcc-8634-4a06-8174-bc1d8114f895"
	Dec 19 04:17:46 no-preload-298059 kubelet[1788]: E1219 04:17:46.299882    1788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-api\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": ErrImagePull: reading manifest 1.14.0 in docker.io/kubernetesui/dashboard-api: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-api-7646d845d9-scngx" podUID="4f806eec-0e2a-4b2c-8ab4-df0bc3208141"
	Dec 19 04:17:49 no-preload-298059 kubelet[1788]: E1219 04:17:49.678999    1788 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766117869678517618  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:134588}  inodes_used:{value:58}}"
	Dec 19 04:17:49 no-preload-298059 kubelet[1788]: E1219 04:17:49.679117    1788 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766117869678517618  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:134588}  inodes_used:{value:58}}"
	Dec 19 04:17:54 no-preload-298059 kubelet[1788]: E1219 04:17:54.295602    1788 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-gkplq" containerName="kubernetes-dashboard-metrics-scraper"
	Dec 19 04:17:54 no-preload-298059 kubelet[1788]: E1219 04:17:54.297278    1788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:0cdefa0492f2868b93f43880ffad4bd8ae8105c02dc5661352a0a40723b05dbc in docker.io/kubernetesui/dashboard-metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-gkplq" podUID="af2a0a73-cbd2-4724-8d28-578fb9abddbe"
	Dec 19 04:17:57 no-preload-298059 kubelet[1788]: E1219 04:17:57.296160    1788 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/metrics-server-5d785b57d4-fkthx" containerName="metrics-server"
	Dec 19 04:17:57 no-preload-298059 kubelet[1788]: E1219 04:17:57.298164    1788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-5d785b57d4-fkthx" podUID="cd519bcc-8634-4a06-8174-bc1d8114f895"
	Dec 19 04:17:59 no-preload-298059 kubelet[1788]: E1219 04:17:59.683282    1788 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766117879682592685  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:134588}  inodes_used:{value:58}}"
	Dec 19 04:17:59 no-preload-298059 kubelet[1788]: E1219 04:17:59.683313    1788 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766117879682592685  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:134588}  inodes_used:{value:58}}"
	Dec 19 04:18:00 no-preload-298059 kubelet[1788]: E1219 04:18:00.299357    1788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-api\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": ErrImagePull: reading manifest 1.14.0 in docker.io/kubernetesui/dashboard-api: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-api-7646d845d9-scngx" podUID="4f806eec-0e2a-4b2c-8ab4-df0bc3208141"
	Dec 19 04:18:01 no-preload-298059 kubelet[1788]: E1219 04:18:01.303504    1788 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-298059" containerName="kube-scheduler"
	
	
	==> kubernetes-dashboard [43c5e14a321a32354d05717270a3820978b39b7fe7f272d3e3c60c08337ccfd9] <==
	I1219 03:54:19.436094       1 main.go:34] "Starting Kubernetes Dashboard Auth" version="1.4.0"
	I1219 03:54:19.436315       1 init.go:49] Using in-cluster config
	I1219 03:54:19.436572       1 main.go:44] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> storage-provisioner [128355a3fc0dfc8009b429e4b715eec612771201a2c8acbd9a1d0b4f3c42f4ec] <==
	I1219 03:53:54.031073       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1219 03:54:24.037738       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [8734051d2f0753c6b39259d3a92400c83923737a12e641958274288c74f737af] <==
	W1219 04:17:42.313409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:17:44.317373       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:17:44.322715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:17:46.326555       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:17:46.330798       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:17:48.334740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:17:48.341226       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:17:50.344313       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:17:50.351625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:17:52.354952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:17:52.360462       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:17:54.366632       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:17:54.371220       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:17:56.375448       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:17:56.381126       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:17:58.384169       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:17:58.391677       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:18:00.395618       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:18:00.400575       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:18:02.404485       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:18:02.409939       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:18:04.414670       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:18:04.420963       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:18:06.426313       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:18:06.432829       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-298059 -n no-preload-298059
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-298059 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: metrics-server-5d785b57d4-fkthx kubernetes-dashboard-api-7646d845d9-scngx kubernetes-dashboard-metrics-scraper-594bbfb84b-gkplq kubernetes-dashboard-web-7f7574785f-pnj4g
helpers_test.go:283: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context no-preload-298059 describe pod metrics-server-5d785b57d4-fkthx kubernetes-dashboard-api-7646d845d9-scngx kubernetes-dashboard-metrics-scraper-594bbfb84b-gkplq kubernetes-dashboard-web-7f7574785f-pnj4g
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context no-preload-298059 describe pod metrics-server-5d785b57d4-fkthx kubernetes-dashboard-api-7646d845d9-scngx kubernetes-dashboard-metrics-scraper-594bbfb84b-gkplq kubernetes-dashboard-web-7f7574785f-pnj4g: exit status 1 (66.088987ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-5d785b57d4-fkthx" not found
	Error from server (NotFound): pods "kubernetes-dashboard-api-7646d845d9-scngx" not found
	Error from server (NotFound): pods "kubernetes-dashboard-metrics-scraper-594bbfb84b-gkplq" not found
	Error from server (NotFound): pods "kubernetes-dashboard-web-7f7574785f-pnj4g" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context no-preload-298059 describe pod metrics-server-5d785b57d4-fkthx kubernetes-dashboard-api-7646d845d9-scngx kubernetes-dashboard-metrics-scraper-594bbfb84b-gkplq kubernetes-dashboard-web-7f7574785f-pnj4g: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (542.15s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (541.99s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1219 04:09:56.919116    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/calico-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-244717 -n embed-certs-244717
start_stop_delete_test.go:285: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-12-19 04:18:34.416723291 +0000 UTC m=+6817.454902605
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-244717 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context embed-certs-244717 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (61.14146ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "dashboard-metrics-scraper" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-244717 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-244717 -n embed-certs-244717
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-244717 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-244717 logs -n 25: (1.073771591s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────────
──────────┐
	│ COMMAND │                                                                                                                       ARGS                                                                                                                       │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────────
──────────┤
	│ stop    │ -p default-k8s-diff-port-168174 --alsologtostderr -v=3                                                                                                                                                                                           │ default-k8s-diff-port-168174 │ jenkins │ v1.37.0 │ 19 Dec 25 03:52 UTC │ 19 Dec 25 03:54 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-094166 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                │ old-k8s-version-094166       │ jenkins │ v1.37.0 │ 19 Dec 25 03:53 UTC │ 19 Dec 25 03:53 UTC │
	│ start   │ -p old-k8s-version-094166 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0      │ old-k8s-version-094166       │ jenkins │ v1.37.0 │ 19 Dec 25 03:53 UTC │ 19 Dec 25 03:53 UTC │
	│ addons  │ enable dashboard -p no-preload-298059 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                     │ no-preload-298059            │ jenkins │ v1.37.0 │ 19 Dec 25 03:53 UTC │ 19 Dec 25 03:53 UTC │
	│ start   │ -p no-preload-298059 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-298059            │ jenkins │ v1.37.0 │ 19 Dec 25 03:53 UTC │ 19 Dec 25 04:00 UTC │
	│ addons  │ enable dashboard -p embed-certs-244717 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ embed-certs-244717           │ jenkins │ v1.37.0 │ 19 Dec 25 03:53 UTC │ 19 Dec 25 03:53 UTC │
	│ start   │ -p embed-certs-244717 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-244717           │ jenkins │ v1.37.0 │ 19 Dec 25 03:53 UTC │ 19 Dec 25 04:00 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-168174 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                          │ default-k8s-diff-port-168174 │ jenkins │ v1.37.0 │ 19 Dec 25 03:54 UTC │ 19 Dec 25 03:54 UTC │
	│ start   │ -p default-k8s-diff-port-168174 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-168174 │ jenkins │ v1.37.0 │ 19 Dec 25 03:54 UTC │ 19 Dec 25 04:01 UTC │
	│ image   │ old-k8s-version-094166 image list --format=json                                                                                                                                                                                                  │ old-k8s-version-094166       │ jenkins │ v1.37.0 │ 19 Dec 25 04:12 UTC │ 19 Dec 25 04:12 UTC │
	│ pause   │ -p old-k8s-version-094166 --alsologtostderr -v=1                                                                                                                                                                                                 │ old-k8s-version-094166       │ jenkins │ v1.37.0 │ 19 Dec 25 04:12 UTC │ 19 Dec 25 04:12 UTC │
	│ unpause │ -p old-k8s-version-094166 --alsologtostderr -v=1                                                                                                                                                                                                 │ old-k8s-version-094166       │ jenkins │ v1.37.0 │ 19 Dec 25 04:12 UTC │ 19 Dec 25 04:12 UTC │
	│ delete  │ -p old-k8s-version-094166                                                                                                                                                                                                                        │ old-k8s-version-094166       │ jenkins │ v1.37.0 │ 19 Dec 25 04:12 UTC │ 19 Dec 25 04:12 UTC │
	│ delete  │ -p old-k8s-version-094166                                                                                                                                                                                                                        │ old-k8s-version-094166       │ jenkins │ v1.37.0 │ 19 Dec 25 04:12 UTC │ 19 Dec 25 04:12 UTC │
	│ start   │ -p newest-cni-509532 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-509532            │ jenkins │ v1.37.0 │ 19 Dec 25 04:12 UTC │ 19 Dec 25 04:12 UTC │
	│ addons  │ enable metrics-server -p newest-cni-509532 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                          │ newest-cni-509532            │ jenkins │ v1.37.0 │ 19 Dec 25 04:12 UTC │ 19 Dec 25 04:12 UTC │
	│ stop    │ -p newest-cni-509532 --alsologtostderr -v=3                                                                                                                                                                                                      │ newest-cni-509532            │ jenkins │ v1.37.0 │ 19 Dec 25 04:12 UTC │ 19 Dec 25 04:14 UTC │
	│ addons  │ enable dashboard -p newest-cni-509532 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                     │ newest-cni-509532            │ jenkins │ v1.37.0 │ 19 Dec 25 04:14 UTC │ 19 Dec 25 04:14 UTC │
	│ start   │ -p newest-cni-509532 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-509532            │ jenkins │ v1.37.0 │ 19 Dec 25 04:14 UTC │                     │
	│ image   │ no-preload-298059 image list --format=json                                                                                                                                                                                                       │ no-preload-298059            │ jenkins │ v1.37.0 │ 19 Dec 25 04:18 UTC │ 19 Dec 25 04:18 UTC │
	│ pause   │ -p no-preload-298059 --alsologtostderr -v=1                                                                                                                                                                                                      │ no-preload-298059            │ jenkins │ v1.37.0 │ 19 Dec 25 04:18 UTC │ 19 Dec 25 04:18 UTC │
	│ unpause │ -p no-preload-298059 --alsologtostderr -v=1                                                                                                                                                                                                      │ no-preload-298059            │ jenkins │ v1.37.0 │ 19 Dec 25 04:18 UTC │ 19 Dec 25 04:18 UTC │
	│ delete  │ -p no-preload-298059                                                                                                                                                                                                                             │ no-preload-298059            │ jenkins │ v1.37.0 │ 19 Dec 25 04:18 UTC │ 19 Dec 25 04:18 UTC │
	│ delete  │ -p no-preload-298059                                                                                                                                                                                                                             │ no-preload-298059            │ jenkins │ v1.37.0 │ 19 Dec 25 04:18 UTC │ 19 Dec 25 04:18 UTC │
	│ delete  │ -p guest-783207                                                                                                                                                                                                                                  │ guest-783207                 │ jenkins │ v1.37.0 │ 19 Dec 25 04:18 UTC │ 19 Dec 25 04:18 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────────
──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 04:14:06
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 04:14:06.361038   61066 out.go:360] Setting OutFile to fd 1 ...
	I1219 04:14:06.361124   61066 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 04:14:06.361131   61066 out.go:374] Setting ErrFile to fd 2...
	I1219 04:14:06.361135   61066 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 04:14:06.361336   61066 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5010/.minikube/bin
	I1219 04:14:06.361764   61066 out.go:368] Setting JSON to false
	I1219 04:14:06.362626   61066 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6990,"bootTime":1766110656,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 04:14:06.362675   61066 start.go:143] virtualization: kvm guest
	I1219 04:14:06.364211   61066 out.go:179] * [newest-cni-509532] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 04:14:06.365123   61066 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 04:14:06.365107   61066 notify.go:221] Checking for updates...
	I1219 04:14:06.366901   61066 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 04:14:06.367890   61066 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-5010/kubeconfig
	I1219 04:14:06.368902   61066 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5010/.minikube
	I1219 04:14:06.369808   61066 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 04:14:06.370728   61066 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 04:14:06.372060   61066 config.go:182] Loaded profile config "newest-cni-509532": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1219 04:14:06.372737   61066 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 04:14:06.412127   61066 out.go:179] * Using the kvm2 driver based on existing profile
	I1219 04:14:06.413168   61066 start.go:309] selected driver: kvm2
	I1219 04:14:06.413184   61066 start.go:928] validating driver "kvm2" against &{Name:newest-cni-509532 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-509532 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.70 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 04:14:06.413290   61066 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 04:14:06.414194   61066 start_flags.go:1012] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1219 04:14:06.414228   61066 cni.go:84] Creating CNI manager for ""
	I1219 04:14:06.414281   61066 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1219 04:14:06.414315   61066 start.go:353] cluster config:
	{Name:newest-cni-509532 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-509532 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.70 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 04:14:06.414395   61066 iso.go:125] acquiring lock: {Name:mk42290af04a74a4dddf27fa33aac85c9bdccfe0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 04:14:06.415469   61066 out.go:179] * Starting "newest-cni-509532" primary control-plane node in "newest-cni-509532" cluster
	I1219 04:14:06.416441   61066 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1219 04:14:06.416466   61066 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-5010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1219 04:14:06.416473   61066 cache.go:65] Caching tarball of preloaded images
	I1219 04:14:06.416548   61066 preload.go:238] Found /home/jenkins/minikube-integration/22230-5010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1219 04:14:06.416559   61066 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1219 04:14:06.416671   61066 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/newest-cni-509532/config.json ...
	I1219 04:14:06.416906   61066 start.go:360] acquireMachinesLock for newest-cni-509532: {Name:mk229398d29442d4d52885bbac963e5004bfbfba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1219 04:14:06.416959   61066 start.go:364] duration metric: took 28.485µs to acquireMachinesLock for "newest-cni-509532"
	I1219 04:14:06.416976   61066 start.go:96] Skipping create...Using existing machine configuration
	I1219 04:14:06.416986   61066 fix.go:54] fixHost starting: 
	I1219 04:14:06.418488   61066 fix.go:112] recreateIfNeeded on newest-cni-509532: state=Stopped err=<nil>
	W1219 04:14:06.418507   61066 fix.go:138] unexpected machine state, will restart: <nil>
	I1219 04:14:06.419900   61066 out.go:252] * Restarting existing kvm2 VM for "newest-cni-509532" ...
	I1219 04:14:06.419939   61066 main.go:144] libmachine: starting domain...
	I1219 04:14:06.419951   61066 main.go:144] libmachine: ensuring networks are active...
	I1219 04:14:06.420639   61066 main.go:144] libmachine: Ensuring network default is active
	I1219 04:14:06.421075   61066 main.go:144] libmachine: Ensuring network mk-newest-cni-509532 is active
	I1219 04:14:06.421699   61066 main.go:144] libmachine: getting domain XML...
	I1219 04:14:06.423077   61066 main.go:144] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>newest-cni-509532</name>
	  <uuid>3bcc174c-f6d6-4825-be3a-2b994ab26c4e</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22230-5010/.minikube/machines/newest-cni-509532/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22230-5010/.minikube/machines/newest-cni-509532/newest-cni-509532.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:a0:99:c3'/>
	      <source network='mk-newest-cni-509532'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:78:17:8e'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1219 04:14:07.744358   61066 main.go:144] libmachine: waiting for domain to start...
	I1219 04:14:07.745789   61066 main.go:144] libmachine: domain is now running
	I1219 04:14:07.745805   61066 main.go:144] libmachine: waiting for IP...
	I1219 04:14:07.746502   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:07.747078   61066 main.go:144] libmachine: domain newest-cni-509532 has current primary IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:07.747096   61066 main.go:144] libmachine: found domain IP: 192.168.61.70
	I1219 04:14:07.747104   61066 main.go:144] libmachine: reserving static IP address...
	I1219 04:14:07.747490   61066 main.go:144] libmachine: found host DHCP lease matching {name: "newest-cni-509532", mac: "52:54:00:a0:99:c3", ip: "192.168.61.70"} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:12:20 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:07.747516   61066 main.go:144] libmachine: skip adding static IP to network mk-newest-cni-509532 - found existing host DHCP lease matching {name: "newest-cni-509532", mac: "52:54:00:a0:99:c3", ip: "192.168.61.70"}
	I1219 04:14:07.747523   61066 main.go:144] libmachine: reserved static IP address 192.168.61.70 for domain newest-cni-509532
	I1219 04:14:07.747527   61066 main.go:144] libmachine: waiting for SSH...
	I1219 04:14:07.747532   61066 main.go:144] libmachine: Getting to WaitForSSH function...
	I1219 04:14:07.749941   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:07.750247   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:12:20 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:07.750277   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:07.750441   61066 main.go:144] libmachine: Using SSH client type: native
	I1219 04:14:07.750665   61066 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.61.70 22 <nil> <nil>}
	I1219 04:14:07.750676   61066 main.go:144] libmachine: About to run SSH command:
	exit 0
	I1219 04:14:10.861828   61066 main.go:144] libmachine: Error dialing TCP: dial tcp 192.168.61.70:22: connect: no route to host
	I1219 04:14:16.942890   61066 main.go:144] libmachine: Error dialing TCP: dial tcp 192.168.61.70:22: connect: no route to host
	I1219 04:14:20.046083   61066 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 04:14:20.050093   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:20.050503   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:14:17 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:20.050526   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:20.050762   61066 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/newest-cni-509532/config.json ...
	I1219 04:14:20.050940   61066 machine.go:94] provisionDockerMachine start ...
	I1219 04:14:20.053514   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:20.054009   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:14:17 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:20.054062   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:20.054281   61066 main.go:144] libmachine: Using SSH client type: native
	I1219 04:14:20.054610   61066 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.61.70 22 <nil> <nil>}
	I1219 04:14:20.054627   61066 main.go:144] libmachine: About to run SSH command:
	hostname
	I1219 04:14:20.159350   61066 main.go:144] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1219 04:14:20.159379   61066 buildroot.go:166] provisioning hostname "newest-cni-509532"
	I1219 04:14:20.162396   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:20.162960   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:14:17 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:20.163001   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:20.163165   61066 main.go:144] libmachine: Using SSH client type: native
	I1219 04:14:20.163399   61066 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.61.70 22 <nil> <nil>}
	I1219 04:14:20.163419   61066 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-509532 && echo "newest-cni-509532" | sudo tee /etc/hostname
	I1219 04:14:20.284167   61066 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-509532
	
	I1219 04:14:20.287544   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:20.287971   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:14:17 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:20.287994   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:20.288136   61066 main.go:144] libmachine: Using SSH client type: native
	I1219 04:14:20.288322   61066 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.61.70 22 <nil> <nil>}
	I1219 04:14:20.288338   61066 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-509532' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-509532/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-509532' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1219 04:14:20.401704   61066 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 04:14:20.401728   61066 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22230-5010/.minikube CaCertPath:/home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22230-5010/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22230-5010/.minikube}
	I1219 04:14:20.401744   61066 buildroot.go:174] setting up certificates
	I1219 04:14:20.401752   61066 provision.go:84] configureAuth start
	I1219 04:14:20.404963   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:20.405393   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:14:17 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:20.405415   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:20.407804   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:20.408151   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:14:17 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:20.408185   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:20.408357   61066 provision.go:143] copyHostCerts
	I1219 04:14:20.408419   61066 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-5010/.minikube/ca.pem, removing ...
	I1219 04:14:20.408444   61066 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-5010/.minikube/ca.pem
	I1219 04:14:20.408538   61066 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22230-5010/.minikube/ca.pem (1082 bytes)
	I1219 04:14:20.408706   61066 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-5010/.minikube/cert.pem, removing ...
	I1219 04:14:20.408721   61066 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-5010/.minikube/cert.pem
	I1219 04:14:20.408775   61066 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22230-5010/.minikube/cert.pem (1123 bytes)
	I1219 04:14:20.408860   61066 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-5010/.minikube/key.pem, removing ...
	I1219 04:14:20.408877   61066 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-5010/.minikube/key.pem
	I1219 04:14:20.408925   61066 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22230-5010/.minikube/key.pem (1679 bytes)
	I1219 04:14:20.409014   61066 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22230-5010/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca-key.pem org=jenkins.newest-cni-509532 san=[127.0.0.1 192.168.61.70 localhost minikube newest-cni-509532]
	I1219 04:14:20.479369   61066 provision.go:177] copyRemoteCerts
	I1219 04:14:20.479428   61066 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1219 04:14:20.481882   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:20.482182   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:14:17 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:20.482203   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:20.482321   61066 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/newest-cni-509532/id_rsa Username:docker}
	I1219 04:14:20.566454   61066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1219 04:14:20.595753   61066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1219 04:14:20.622921   61066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1219 04:14:20.656657   61066 provision.go:87] duration metric: took 254.891587ms to configureAuth
	I1219 04:14:20.656688   61066 buildroot.go:189] setting minikube options for container-runtime
	I1219 04:14:20.656898   61066 config.go:182] Loaded profile config "newest-cni-509532": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1219 04:14:20.659654   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:20.660055   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:14:17 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:20.660074   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:20.660268   61066 main.go:144] libmachine: Using SSH client type: native
	I1219 04:14:20.660466   61066 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.61.70 22 <nil> <nil>}
	I1219 04:14:20.660480   61066 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1219 04:14:20.908219   61066 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1219 04:14:20.908261   61066 machine.go:97] duration metric: took 857.295481ms to provisionDockerMachine
	I1219 04:14:20.908277   61066 start.go:293] postStartSetup for "newest-cni-509532" (driver="kvm2")
	I1219 04:14:20.908289   61066 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1219 04:14:20.908347   61066 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1219 04:14:20.911558   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:20.912049   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:14:17 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:20.912081   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:20.912214   61066 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/newest-cni-509532/id_rsa Username:docker}
	I1219 04:14:20.995002   61066 ssh_runner.go:195] Run: cat /etc/os-release
	I1219 04:14:20.999993   61066 info.go:137] Remote host: Buildroot 2025.02
	I1219 04:14:21.000015   61066 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-5010/.minikube/addons for local assets ...
	I1219 04:14:21.000093   61066 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-5010/.minikube/files for local assets ...
	I1219 04:14:21.000225   61066 filesync.go:149] local asset: /home/jenkins/minikube-integration/22230-5010/.minikube/files/etc/ssl/certs/89372.pem -> 89372.pem in /etc/ssl/certs
	I1219 04:14:21.000345   61066 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1219 04:14:21.011859   61066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/files/etc/ssl/certs/89372.pem --> /etc/ssl/certs/89372.pem (1708 bytes)
	I1219 04:14:21.044486   61066 start.go:296] duration metric: took 136.195131ms for postStartSetup
	I1219 04:14:21.044529   61066 fix.go:56] duration metric: took 14.62754292s for fixHost
	I1219 04:14:21.047285   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:21.047669   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:14:17 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:21.047697   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:21.047883   61066 main.go:144] libmachine: Using SSH client type: native
	I1219 04:14:21.048095   61066 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.61.70 22 <nil> <nil>}
	I1219 04:14:21.048112   61066 main.go:144] libmachine: About to run SSH command:
	date +%s.%N
	I1219 04:14:21.154669   61066 main.go:144] libmachine: SSH cmd err, output: <nil>: 1766117661.109413285
	
	I1219 04:14:21.154689   61066 fix.go:216] guest clock: 1766117661.109413285
	I1219 04:14:21.154697   61066 fix.go:229] Guest: 2025-12-19 04:14:21.109413285 +0000 UTC Remote: 2025-12-19 04:14:21.04453285 +0000 UTC m=+14.732482606 (delta=64.880435ms)
	I1219 04:14:21.154716   61066 fix.go:200] guest clock delta is within tolerance: 64.880435ms
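The guest-clock step above reads "date +%s.%N" inside the VM, compares it with the host's wall clock, and only resyncs when the skew exceeds a tolerance. A minimal Go sketch of that comparison, using the values from this run (the 1s tolerance is an assumption for illustration, not minikube's exact threshold):

package main

import (
	"fmt"
	"time"
)

// clockDeltaWithinTolerance compares the guest's clock reading with the host's and
// reports whether the absolute skew is under the given tolerance.
func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	guest := time.Unix(1766117661, 109413285)      // guest: 1766117661.109413285 (from the log)
	host := guest.Add(-64880435 * time.Nanosecond) // host read ~64.88ms earlier, as logged
	delta, ok := clockDeltaWithinTolerance(guest, host, time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}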
	I1219 04:14:21.154729   61066 start.go:83] releasing machines lock for "newest-cni-509532", held for 14.737760627s
	I1219 04:14:21.157999   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:21.158406   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:14:17 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:21.158446   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:21.159177   61066 ssh_runner.go:195] Run: cat /version.json
	I1219 04:14:21.159277   61066 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1219 04:14:21.162317   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:21.162712   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:21.162824   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:14:17 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:21.162859   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:21.163054   61066 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/newest-cni-509532/id_rsa Username:docker}
	I1219 04:14:21.163287   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:14:17 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:21.163320   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:21.163501   61066 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/newest-cni-509532/id_rsa Username:docker}
	I1219 04:14:21.240694   61066 ssh_runner.go:195] Run: systemctl --version
	I1219 04:14:21.273827   61066 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1219 04:14:21.422462   61066 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1219 04:14:21.428763   61066 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1219 04:14:21.428831   61066 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1219 04:14:21.447713   61066 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1219 04:14:21.447735   61066 start.go:496] detecting cgroup driver to use...
	I1219 04:14:21.447788   61066 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1219 04:14:21.467586   61066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1219 04:14:21.484309   61066 docker.go:218] disabling cri-docker service (if available) ...
	I1219 04:14:21.484377   61066 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1219 04:14:21.500884   61066 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1219 04:14:21.516934   61066 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1219 04:14:21.663592   61066 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1219 04:14:21.886439   61066 docker.go:234] disabling docker service ...
	I1219 04:14:21.886499   61066 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1219 04:14:21.902373   61066 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1219 04:14:21.916305   61066 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1219 04:14:22.098945   61066 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1219 04:14:22.243790   61066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1219 04:14:22.258649   61066 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 04:14:22.280345   61066 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1219 04:14:22.280436   61066 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 04:14:22.293096   61066 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1219 04:14:22.293154   61066 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 04:14:22.304967   61066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 04:14:22.317195   61066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 04:14:22.329451   61066 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1219 04:14:22.342541   61066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 04:14:22.354632   61066 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 04:14:22.376253   61066 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
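The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, force conmon into the pod cgroup, and open unprivileged low ports. A hedged sketch of the resulting drop-in fragment, held in a Go constant for illustration (shape only; the real file carries additional keys and its section headers may differ):

package main

import "fmt"

// expectedCrioDropIn approximates the state of 02-crio.conf after the edits above.
const expectedCrioDropIn = `[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
`

func main() { fmt.Print(expectedCrioDropIn) }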
	I1219 04:14:22.388591   61066 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1219 04:14:22.399129   61066 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1219 04:14:22.399179   61066 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1219 04:14:22.418823   61066 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
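The two commands above cover CRI-O's networking prerequisites: bridge-nf-call-iptables only exists once br_netfilter is loaded (hence the tolerated sysctl failure followed by modprobe), and ip_forward must be 1. A small Go sketch that reads the same procfs entries inside the guest:

package main

import (
	"fmt"
	"os"
	"strings"
)

// sysctlValue reads a sysctl through its procfs path.
func sysctlValue(path string) (string, error) {
	b, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	for _, p := range []string{
		"/proc/sys/net/bridge/bridge-nf-call-iptables", // absent until br_netfilter is loaded
		"/proc/sys/net/ipv4/ip_forward",                // must be "1" for pod traffic
	} {
		v, err := sysctlValue(p)
		if err != nil {
			fmt.Printf("%s: %v (module not loaded?)\n", p, err)
			continue
		}
		fmt.Printf("%s = %s\n", p, v)
	}
}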
	I1219 04:14:22.431040   61066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 04:14:22.578462   61066 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1219 04:14:22.691413   61066 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1219 04:14:22.691504   61066 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1219 04:14:22.696926   61066 start.go:564] Will wait 60s for crictl version
	I1219 04:14:22.696992   61066 ssh_runner.go:195] Run: which crictl
	I1219 04:14:22.700936   61066 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1219 04:14:22.737311   61066 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1219 04:14:22.737400   61066 ssh_runner.go:195] Run: crio --version
	I1219 04:14:22.764722   61066 ssh_runner.go:195] Run: crio --version
	I1219 04:14:22.794640   61066 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.29.1 ...
	I1219 04:14:22.798427   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:22.798864   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:14:17 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:22.798888   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:22.799088   61066 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1219 04:14:22.803142   61066 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
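The bash pipeline above rewrites /etc/hosts so host.minikube.internal resolves to the gateway IP: it drops any existing line ending in that name and appends a fresh "ip<TAB>name" entry. A minimal Go sketch of the same rewrite (it only returns the new contents; the real flow writes a temp file and sudo-copies it back over /etc/hosts):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry keeps every line that does not end in "\t"+name, then appends "ip\tname".
func ensureHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	return strings.Join(kept, "\n") + fmt.Sprintf("\n%s\t%s\n", ip, name)
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	fmt.Print(ensureHostsEntry(strings.TrimRight(string(data), "\n"), "192.168.61.1", "host.minikube.internal"))
}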
	I1219 04:14:22.819541   61066 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1219 04:14:22.820459   61066 kubeadm.go:884] updating cluster {Name:newest-cni-509532 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.35.0-rc.1 ClusterName:newest-cni-509532 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.70 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: N
etwork: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1219 04:14:22.820600   61066 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1219 04:14:22.820648   61066 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 04:14:22.852144   61066 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-rc.1". assuming images are not preloaded.
	I1219 04:14:22.852235   61066 ssh_runner.go:195] Run: which lz4
	I1219 04:14:22.856631   61066 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1219 04:14:22.861114   61066 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1219 04:14:22.861147   61066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340598599 bytes)
	I1219 04:14:24.100811   61066 crio.go:462] duration metric: took 1.24424385s to copy over tarball
	I1219 04:14:24.100887   61066 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1219 04:14:25.642217   61066 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.541263209s)
	I1219 04:14:25.642254   61066 crio.go:469] duration metric: took 1.541416336s to extract the tarball
	I1219 04:14:25.642264   61066 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1219 04:14:25.680384   61066 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 04:14:25.722028   61066 crio.go:514] all images are preloaded for cri-o runtime.
	I1219 04:14:25.722055   61066 cache_images.go:86] Images are preloaded, skipping loading
	I1219 04:14:25.722063   61066 kubeadm.go:935] updating node { 192.168.61.70 8443 v1.35.0-rc.1 crio true true} ...
	I1219 04:14:25.722183   61066 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-509532 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.70
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-509532 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1219 04:14:25.722277   61066 ssh_runner.go:195] Run: crio config
	I1219 04:14:25.769708   61066 cni.go:84] Creating CNI manager for ""
	I1219 04:14:25.769737   61066 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1219 04:14:25.769764   61066 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1219 04:14:25.769793   61066 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.61.70 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-509532 NodeName:newest-cni-509532 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.70"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.70 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1219 04:14:25.769971   61066 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.70
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-509532"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.70"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.70"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1219 04:14:25.770093   61066 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1219 04:14:25.783203   61066 binaries.go:51] Found k8s binaries, skipping transfer
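The generated kubeadm config above pairs podSubnet 10.42.0.0/16 (from kubeadm.pod-network-cidr) with serviceSubnet 10.96.0.0/12; the two ranges must not overlap or Service and Pod routing would collide. A quick sanity check using Go's net/netip:

package main

import (
	"fmt"
	"net/netip"
)

// overlaps reports whether two CIDR prefixes share any addresses; for prefixes this
// reduces to one containing the other's network address.
func overlaps(a, b netip.Prefix) bool {
	return a.Contains(b.Addr()) || b.Contains(a.Addr())
}

func main() {
	pod := netip.MustParsePrefix("10.42.0.0/16")
	svc := netip.MustParsePrefix("10.96.0.0/12")
	fmt.Printf("pod %v / service %v overlap: %v\n", pod, svc, overlaps(pod, svc)) // expect: false
}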
	I1219 04:14:25.783264   61066 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1219 04:14:25.794507   61066 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1219 04:14:25.813874   61066 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1219 04:14:25.832656   61066 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1219 04:14:25.851473   61066 ssh_runner.go:195] Run: grep 192.168.61.70	control-plane.minikube.internal$ /etc/hosts
	I1219 04:14:25.855283   61066 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.70	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 04:14:25.868794   61066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 04:14:26.012641   61066 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 04:14:26.033299   61066 certs.go:69] Setting up /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/newest-cni-509532 for IP: 192.168.61.70
	I1219 04:14:26.033319   61066 certs.go:195] generating shared ca certs ...
	I1219 04:14:26.033332   61066 certs.go:227] acquiring lock for ca certs: {Name:mk433b742ffd55233764aebe3c6f298e0d1f02ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 04:14:26.033472   61066 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22230-5010/.minikube/ca.key
	I1219 04:14:26.033510   61066 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22230-5010/.minikube/proxy-client-ca.key
	I1219 04:14:26.033526   61066 certs.go:257] generating profile certs ...
	I1219 04:14:26.033628   61066 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/newest-cni-509532/client.key
	I1219 04:14:26.033688   61066 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/newest-cni-509532/apiserver.key.91f2c6a6
	I1219 04:14:26.033722   61066 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/newest-cni-509532/proxy-client.key
	I1219 04:14:26.033831   61066 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/8937.pem (1338 bytes)
	W1219 04:14:26.033863   61066 certs.go:480] ignoring /home/jenkins/minikube-integration/22230-5010/.minikube/certs/8937_empty.pem, impossibly tiny 0 bytes
	I1219 04:14:26.033872   61066 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca-key.pem (1675 bytes)
	I1219 04:14:26.033902   61066 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca.pem (1082 bytes)
	I1219 04:14:26.033928   61066 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/cert.pem (1123 bytes)
	I1219 04:14:26.033950   61066 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/key.pem (1679 bytes)
	I1219 04:14:26.033991   61066 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/files/etc/ssl/certs/89372.pem (1708 bytes)
	I1219 04:14:26.034602   61066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1219 04:14:26.074451   61066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1219 04:14:26.106740   61066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1219 04:14:26.134229   61066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1219 04:14:26.161855   61066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/newest-cni-509532/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1219 04:14:26.191298   61066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/newest-cni-509532/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1219 04:14:26.220617   61066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/newest-cni-509532/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1219 04:14:26.248595   61066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/newest-cni-509532/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1219 04:14:26.277651   61066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/certs/8937.pem --> /usr/share/ca-certificates/8937.pem (1338 bytes)
	I1219 04:14:26.304192   61066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/files/etc/ssl/certs/89372.pem --> /usr/share/ca-certificates/89372.pem (1708 bytes)
	I1219 04:14:26.331489   61066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1219 04:14:26.359526   61066 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1219 04:14:26.381074   61066 ssh_runner.go:195] Run: openssl version
	I1219 04:14:26.387536   61066 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1219 04:14:26.398290   61066 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1219 04:14:26.409385   61066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1219 04:14:26.414244   61066 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 19 02:26 /usr/share/ca-certificates/minikubeCA.pem
	I1219 04:14:26.414281   61066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1219 04:14:26.421272   61066 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1219 04:14:26.431473   61066 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1219 04:14:26.441908   61066 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8937.pem
	I1219 04:14:26.453021   61066 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8937.pem /etc/ssl/certs/8937.pem
	I1219 04:14:26.464301   61066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8937.pem
	I1219 04:14:26.469137   61066 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 19 02:46 /usr/share/ca-certificates/8937.pem
	I1219 04:14:26.469186   61066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8937.pem
	I1219 04:14:26.475991   61066 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1219 04:14:26.486849   61066 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8937.pem /etc/ssl/certs/51391683.0
	I1219 04:14:26.497751   61066 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/89372.pem
	I1219 04:14:26.509027   61066 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/89372.pem /etc/ssl/certs/89372.pem
	I1219 04:14:26.520212   61066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/89372.pem
	I1219 04:14:26.525194   61066 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 19 02:46 /usr/share/ca-certificates/89372.pem
	I1219 04:14:26.525249   61066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/89372.pem
	I1219 04:14:26.532003   61066 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1219 04:14:26.542354   61066 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/89372.pem /etc/ssl/certs/3ec20f2e.0
	I1219 04:14:26.554029   61066 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1219 04:14:26.558993   61066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1219 04:14:26.566192   61066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1219 04:14:26.572977   61066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1219 04:14:26.580715   61066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1219 04:14:26.587688   61066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1219 04:14:26.594505   61066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
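The six "openssl x509 ... -checkend 86400" runs above verify that each control-plane certificate remains valid for at least 24 hours before the existing secrets are reused. The same check expressed in Go with crypto/x509:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// certValidFor reports whether the PEM certificate at path is still valid d from now.
func certValidFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := certValidFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}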
	I1219 04:14:26.601393   61066 kubeadm.go:401] StartCluster: {Name:newest-cni-509532 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35
.0-rc.1 ClusterName:newest-cni-509532 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.70 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Netw
ork: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 04:14:26.601491   61066 cri.go:57] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1219 04:14:26.601531   61066 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 04:14:26.634723   61066 cri.go:92] found id: ""
	I1219 04:14:26.634795   61066 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1219 04:14:26.646970   61066 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1219 04:14:26.646989   61066 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1219 04:14:26.647032   61066 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1219 04:14:26.657908   61066 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1219 04:14:26.659041   61066 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-509532" does not appear in /home/jenkins/minikube-integration/22230-5010/kubeconfig
	I1219 04:14:26.659677   61066 kubeconfig.go:62] /home/jenkins/minikube-integration/22230-5010/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-509532" cluster setting kubeconfig missing "newest-cni-509532" context setting]
	I1219 04:14:26.660520   61066 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5010/kubeconfig: {Name:mk464ac013e036429e360ed78fc04687ed75f83a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 04:14:26.662741   61066 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1219 04:14:26.673645   61066 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.61.70
	I1219 04:14:26.673667   61066 kubeadm.go:1161] stopping kube-system containers ...
	I1219 04:14:26.673679   61066 cri.go:57] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1219 04:14:26.673730   61066 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 04:14:26.708336   61066 cri.go:92] found id: ""
	I1219 04:14:26.708403   61066 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1219 04:14:26.735368   61066 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1219 04:14:26.746710   61066 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1219 04:14:26.746732   61066 kubeadm.go:158] found existing configuration files:
	
	I1219 04:14:26.746773   61066 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1219 04:14:26.756763   61066 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1219 04:14:26.756825   61066 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1219 04:14:26.767551   61066 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1219 04:14:26.777603   61066 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1219 04:14:26.777657   61066 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1219 04:14:26.789616   61066 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1219 04:14:26.799989   61066 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1219 04:14:26.800043   61066 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1219 04:14:26.811043   61066 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1219 04:14:26.821685   61066 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1219 04:14:26.821747   61066 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1219 04:14:26.832490   61066 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1219 04:14:26.842910   61066 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 04:14:26.899704   61066 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 04:14:27.435741   61066 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1219 04:14:27.687135   61066 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 04:14:27.761434   61066 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
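Because existing configuration files were found, the restart path replays individual kubeadm init phases against the regenerated /var/tmp/minikube/kubeadm.yaml instead of re-running a full init: certs, kubeconfigs, kubelet start, static control-plane manifests, then local etcd. A sketch of that sequence (minikube actually invokes the version-pinned kubeadm binary under sudo over SSH):

package main

import (
	"fmt"
	"os/exec"
)

// restartPhases runs the same init phases, in the same order, as the log above.
func restartPhases(config string) error {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, p...)
		args = append(args, "--config", config)
		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("kubeadm %v: %v\n%s", p, err, out)
		}
	}
	return nil
}

func main() {
	fmt.Println(restartPhases("/var/tmp/minikube/kubeadm.yaml"))
}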
	I1219 04:14:27.848539   61066 api_server.go:52] waiting for apiserver process to appear ...
	I1219 04:14:27.848670   61066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 04:14:28.348883   61066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 04:14:28.848774   61066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 04:14:29.349337   61066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 04:14:29.848837   61066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 04:14:30.349757   61066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 04:14:30.384559   61066 api_server.go:72] duration metric: took 2.536027777s to wait for apiserver process to appear ...
	I1219 04:14:30.384596   61066 api_server.go:88] waiting for apiserver healthz status ...
	I1219 04:14:30.384624   61066 api_server.go:253] Checking apiserver healthz at https://192.168.61.70:8443/healthz ...
	I1219 04:14:32.236209   61066 api_server.go:279] https://192.168.61.70:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1219 04:14:32.236242   61066 api_server.go:103] status: https://192.168.61.70:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1219 04:14:32.236259   61066 api_server.go:253] Checking apiserver healthz at https://192.168.61.70:8443/healthz ...
	I1219 04:14:32.301458   61066 api_server.go:279] https://192.168.61.70:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1219 04:14:32.301485   61066 api_server.go:103] status: https://192.168.61.70:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1219 04:14:32.384678   61066 api_server.go:253] Checking apiserver healthz at https://192.168.61.70:8443/healthz ...
	I1219 04:14:32.390384   61066 api_server.go:279] https://192.168.61.70:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[-]poststarthook/start-kube-apiserver-identity-lease-controller failed: reason withheld
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 04:14:32.390423   61066 api_server.go:103] status: https://192.168.61.70:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[-]poststarthook/start-kube-apiserver-identity-lease-controller failed: reason withheld
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 04:14:32.884690   61066 api_server.go:253] Checking apiserver healthz at https://192.168.61.70:8443/healthz ...
	I1219 04:14:32.894173   61066 api_server.go:279] https://192.168.61.70:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 04:14:32.894197   61066 api_server.go:103] status: https://192.168.61.70:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 04:14:33.384837   61066 api_server.go:253] Checking apiserver healthz at https://192.168.61.70:8443/healthz ...
	I1219 04:14:33.395731   61066 api_server.go:279] https://192.168.61.70:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 04:14:33.395765   61066 api_server.go:103] status: https://192.168.61.70:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 04:14:33.885453   61066 api_server.go:253] Checking apiserver healthz at https://192.168.61.70:8443/healthz ...
	I1219 04:14:33.890388   61066 api_server.go:279] https://192.168.61.70:8443/healthz returned 200:
	ok
	I1219 04:14:33.898178   61066 api_server.go:141] control plane version: v1.35.0-rc.1
	I1219 04:14:33.898202   61066 api_server.go:131] duration metric: took 3.513597679s to wait for apiserver health ...
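The readiness loop above first sees 403 (the anonymous probe is rejected before RBAC bootstrap completes), then 500 with individual poststarthook failures, and finally 200 once every hook reports ok. A minimal Go sketch of such a healthz poll (TLS verification is skipped here purely for brevity; minikube authenticates against the cluster CA instead):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// pollHealthz hits the apiserver /healthz endpoint until it returns 200 or the deadline passes.
func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // body is "ok"
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %v", timeout)
}

func main() {
	fmt.Println(pollHealthz("https://192.168.61.70:8443/healthz", 60*time.Second))
}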
	I1219 04:14:33.898212   61066 cni.go:84] Creating CNI manager for ""
	I1219 04:14:33.898219   61066 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1219 04:14:33.899474   61066 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1219 04:14:33.900488   61066 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1219 04:14:33.923233   61066 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
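With the "kvm2" driver and the crio runtime, minikube falls back to its built-in bridge CNI and writes a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist. The exact file is not reproduced in the log; the snippet below is only an assumed shape of such a bridge-plus-portmap conflist with host-local IPAM over the pod CIDR, held in a Go constant:

package main

import "fmt"

// bridgeConflist is an assumed example of a bridge CNI config, not the literal file
// minikube generated for newest-cni-509532.
const bridgeConflist = `{
  "cniVersion": "0.4.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.42.0.0/16"
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() { fmt.Println(bridgeConflist) }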
	I1219 04:14:33.972262   61066 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 04:14:33.984776   61066 system_pods.go:59] 8 kube-system pods found
	I1219 04:14:33.984823   61066 system_pods.go:61] "coredns-7d764666f9-wt5mn" [1e1844bc-e4c0-493b-bbdf-017660625fb4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 04:14:33.984834   61066 system_pods.go:61] "etcd-newest-cni-509532" [668ecd06-0928-483a-b393-bae23e1269b9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 04:14:33.984855   61066 system_pods.go:61] "kube-apiserver-newest-cni-509532" [3cc26981-eaaf-4a54-ac65-5e98371efb21] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 04:14:33.984865   61066 system_pods.go:61] "kube-controller-manager-newest-cni-509532" [38fb14a4-787e-490a-9049-21bf6733543b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 04:14:33.984879   61066 system_pods.go:61] "kube-proxy-k5ptq" [b2d52f71-bf33-4869-a7f5-d33183a19cce] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 04:14:33.984892   61066 system_pods.go:61] "kube-scheduler-newest-cni-509532" [53f913da-bb8f-4193-901b-272a4b77217c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 04:14:33.984904   61066 system_pods.go:61] "metrics-server-5d785b57d4-7sqzf" [0af927e7-5a60-42a7-adc5-638b0ac652c5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 04:14:33.984916   61066 system_pods.go:61] "storage-provisioner" [2154f643-f3b5-486f-bfc4-7355248590cd] Running
	I1219 04:14:33.984933   61066 system_pods.go:74] duration metric: took 12.647245ms to wait for pod list to return data ...
	I1219 04:14:33.984945   61066 node_conditions.go:102] verifying NodePressure condition ...
	I1219 04:14:33.993929   61066 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1219 04:14:33.993953   61066 node_conditions.go:123] node cpu capacity is 2
	I1219 04:14:33.993966   61066 node_conditions.go:105] duration metric: took 9.012349ms to run NodePressure ...
	I1219 04:14:33.994028   61066 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 04:14:34.291614   61066 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1219 04:14:34.313097   61066 ops.go:34] apiserver oom_adj: -16
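Reading /proc/$(pgrep kube-apiserver)/oom_adj confirms the static pod came up with a strongly negative OOM score adjustment (-16 in this run), so the kernel will prefer to kill other processes first under memory pressure. The same check sketched in Go (if pgrep reports several pids, the first one is used):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// apiserverOOMAdj returns the oom_adj value of the running kube-apiserver process.
func apiserverOOMAdj() (string, error) {
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		return "", err
	}
	pid := strings.Fields(string(out))[0]
	b, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	fmt.Println(apiserverOOMAdj())
}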
	I1219 04:14:34.313126   61066 kubeadm.go:602] duration metric: took 7.666128862s to restartPrimaryControlPlane
	I1219 04:14:34.313139   61066 kubeadm.go:403] duration metric: took 7.711753039s to StartCluster
	I1219 04:14:34.313159   61066 settings.go:142] acquiring lock: {Name:mkb131693a40b9aa50c302192272518c3561c861 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 04:14:34.313257   61066 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-5010/kubeconfig
	I1219 04:14:34.315826   61066 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5010/kubeconfig: {Name:mk464ac013e036429e360ed78fc04687ed75f83a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 04:14:34.316151   61066 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.61.70 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1219 04:14:34.316217   61066 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1219 04:14:34.316324   61066 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-509532"
	I1219 04:14:34.316354   61066 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-509532"
	I1219 04:14:34.316351   61066 addons.go:70] Setting default-storageclass=true in profile "newest-cni-509532"
	W1219 04:14:34.316364   61066 addons.go:248] addon storage-provisioner should already be in state true
	I1219 04:14:34.316368   61066 addons.go:70] Setting metrics-server=true in profile "newest-cni-509532"
	I1219 04:14:34.316376   61066 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-509532"
	I1219 04:14:34.316382   61066 addons.go:239] Setting addon metrics-server=true in "newest-cni-509532"
	W1219 04:14:34.316391   61066 addons.go:248] addon metrics-server should already be in state true
	I1219 04:14:34.316412   61066 addons.go:70] Setting dashboard=true in profile "newest-cni-509532"
	I1219 04:14:34.316468   61066 addons.go:239] Setting addon dashboard=true in "newest-cni-509532"
	W1219 04:14:34.316477   61066 addons.go:248] addon dashboard should already be in state true
	I1219 04:14:34.316494   61066 host.go:66] Checking if "newest-cni-509532" exists ...
	I1219 04:14:34.316398   61066 host.go:66] Checking if "newest-cni-509532" exists ...
	I1219 04:14:34.316426   61066 host.go:66] Checking if "newest-cni-509532" exists ...
	I1219 04:14:34.316354   61066 config.go:182] Loaded profile config "newest-cni-509532": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1219 04:14:34.318473   61066 out.go:179] * Verifying Kubernetes components...
	I1219 04:14:34.319389   61066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 04:14:34.320092   61066 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 04:14:34.320109   61066 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
	I1219 04:14:34.320758   61066 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 04:14:34.321246   61066 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1219 04:14:34.321284   61066 addons.go:239] Setting addon default-storageclass=true in "newest-cni-509532"
	W1219 04:14:34.321510   61066 addons.go:248] addon default-storageclass should already be in state true
	I1219 04:14:34.321535   61066 host.go:66] Checking if "newest-cni-509532" exists ...
	I1219 04:14:34.321897   61066 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 04:14:34.321913   61066 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 04:14:34.322387   61066 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1219 04:14:34.322403   61066 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1219 04:14:34.323725   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:34.323987   61066 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 04:14:34.324003   61066 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 04:14:34.324827   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:14:17 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:34.324873   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:34.325140   61066 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/newest-cni-509532/id_rsa Username:docker}
	I1219 04:14:34.326238   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:34.326275   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:34.326828   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:14:17 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:34.326860   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:34.326860   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:14:17 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:34.326949   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:34.327058   61066 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/newest-cni-509532/id_rsa Username:docker}
	I1219 04:14:34.327242   61066 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/newest-cni-509532/id_rsa Username:docker}
	I1219 04:14:34.327975   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:34.328324   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:14:17 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:34.328347   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:34.328469   61066 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/newest-cni-509532/id_rsa Username:docker}
	I1219 04:14:34.587142   61066 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 04:14:34.611731   61066 api_server.go:52] waiting for apiserver process to appear ...
	I1219 04:14:34.611822   61066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 04:14:34.634330   61066 api_server.go:72] duration metric: took 318.137827ms to wait for apiserver process to appear ...
	I1219 04:14:34.634361   61066 api_server.go:88] waiting for apiserver healthz status ...
	I1219 04:14:34.634385   61066 api_server.go:253] Checking apiserver healthz at https://192.168.61.70:8443/healthz ...
	I1219 04:14:34.640210   61066 api_server.go:279] https://192.168.61.70:8443/healthz returned 200:
	ok
	I1219 04:14:34.641463   61066 api_server.go:141] control plane version: v1.35.0-rc.1
	I1219 04:14:34.641480   61066 api_server.go:131] duration metric: took 7.111019ms to wait for apiserver health ...
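The healthz probe above is a plain HTTPS GET against the apiserver endpoint recorded in the log (https://192.168.61.70:8443/healthz). A minimal Go sketch of that check follows; it skips TLS verification purely for illustration (the real minikube client authenticates with the cluster's certificates), and the address and port are copied from this log rather than being general defaults.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Illustrative only: skip certificate verification instead of loading the
	// cluster CA / client certs that the real check would use.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.61.70:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok", as in the log above
}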
	I1219 04:14:34.641487   61066 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 04:14:34.644743   61066 system_pods.go:59] 8 kube-system pods found
	I1219 04:14:34.644776   61066 system_pods.go:61] "coredns-7d764666f9-wt5mn" [1e1844bc-e4c0-493b-bbdf-017660625fb4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 04:14:34.644789   61066 system_pods.go:61] "etcd-newest-cni-509532" [668ecd06-0928-483a-b393-bae23e1269b9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 04:14:34.644801   61066 system_pods.go:61] "kube-apiserver-newest-cni-509532" [3cc26981-eaaf-4a54-ac65-5e98371efb21] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 04:14:34.644812   61066 system_pods.go:61] "kube-controller-manager-newest-cni-509532" [38fb14a4-787e-490a-9049-21bf6733543b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 04:14:34.644821   61066 system_pods.go:61] "kube-proxy-k5ptq" [b2d52f71-bf33-4869-a7f5-d33183a19cce] Running
	I1219 04:14:34.644837   61066 system_pods.go:61] "kube-scheduler-newest-cni-509532" [53f913da-bb8f-4193-901b-272a4b77217c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 04:14:34.644847   61066 system_pods.go:61] "metrics-server-5d785b57d4-7sqzf" [0af927e7-5a60-42a7-adc5-638b0ac652c5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 04:14:34.644859   61066 system_pods.go:61] "storage-provisioner" [2154f643-f3b5-486f-bfc4-7355248590cd] Running
	I1219 04:14:34.644867   61066 system_pods.go:74] duration metric: took 3.373739ms to wait for pod list to return data ...
	I1219 04:14:34.644878   61066 default_sa.go:34] waiting for default service account to be created ...
	I1219 04:14:34.647226   61066 default_sa.go:45] found service account: "default"
	I1219 04:14:34.647247   61066 default_sa.go:55] duration metric: took 2.35291ms for default service account to be created ...
	I1219 04:14:34.647260   61066 kubeadm.go:587] duration metric: took 331.072692ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1219 04:14:34.647286   61066 node_conditions.go:102] verifying NodePressure condition ...
	I1219 04:14:34.649136   61066 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1219 04:14:34.649158   61066 node_conditions.go:123] node cpu capacity is 2
	I1219 04:14:34.649171   61066 node_conditions.go:105] duration metric: took 1.875766ms to run NodePressure ...
	I1219 04:14:34.649184   61066 start.go:242] waiting for startup goroutines ...
	I1219 04:14:34.684661   61066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 04:14:34.690440   61066 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1219 04:14:34.690464   61066 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1219 04:14:34.703173   61066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 04:14:34.737761   61066 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1219 04:14:34.737791   61066 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1219 04:14:34.790265   61066 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 04:14:34.790287   61066 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1219 04:14:34.852757   61066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 04:14:34.887897   61066 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 04:14:36.013051   61066 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.328350909s)
	I1219 04:14:36.013133   61066 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.309931889s)
	I1219 04:14:36.111178   61066 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.25838056s)
	I1219 04:14:36.111204   61066 ssh_runner.go:235] Completed: test -f /usr/bin/helm: (1.223278971s)
	I1219 04:14:36.111222   61066 addons.go:500] Verifying addon metrics-server=true in "newest-cni-509532"
	I1219 04:14:36.111276   61066 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 04:14:36.114770   61066 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	I1219 04:14:36.989143   61066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	I1219 04:14:40.311413   61066 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (3.322216791s)
	I1219 04:14:40.311501   61066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 04:14:40.700323   61066 addons.go:500] Verifying addon dashboard=true in "newest-cni-509532"
	I1219 04:14:40.703308   61066 out.go:179] * Verifying dashboard addon...
	I1219 04:14:40.705388   61066 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 04:14:40.714051   61066 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 04:14:40.714067   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
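The repeated kapi.go lines that follow are a roughly half-second polling loop waiting for the pod matching app.kubernetes.io/name=kubernetes-dashboard-web in the kubernetes-dashboard namespace to leave Pending. Below is a rough client-go sketch of the same polling pattern, assuming the kubeconfig path shown earlier in this log; it illustrates the pattern only and is not the exact kapi.go implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from the settings.go line earlier in this log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22230-5010/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	selector := "app.kubernetes.io/name=kubernetes-dashboard-web"
	for {
		// List pods by the same label selector kapi.go waits on.
		pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s: %s\n", p.Name, p.Status.Phase)
			if p.Status.Phase == corev1.PodRunning {
				return
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the timestamps below
	}
}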
	I1219 04:14:41.214289   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:41.709381   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:42.208940   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:42.711074   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:43.209100   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:43.709687   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:44.209381   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:44.709033   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:45.208335   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:45.708776   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:46.209886   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:46.708530   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:47.209371   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:47.708645   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:48.209250   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:48.708911   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:49.208441   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:49.709372   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:50.209545   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:50.708944   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:51.208438   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:51.709022   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:52.208662   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:52.709170   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:53.209170   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:53.709621   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:54.209235   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:54.708902   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:55.208961   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:55.708819   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:56.209635   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:56.709369   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:57.208990   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:57.709114   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:58.209155   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:58.708556   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:59.208920   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:59.709099   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:00.208668   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:00.709308   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:01.208791   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:01.709282   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:02.208969   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:02.709020   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:03.209562   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:03.709818   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:04.209394   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:04.710095   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:05.208341   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:05.708877   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:06.209468   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:06.709021   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:07.208884   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:07.710798   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:08.209944   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:08.709151   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:09.209372   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:09.709439   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:10.210196   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:10.709268   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:11.209953   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:11.708633   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:12.209488   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:12.709557   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:13.209528   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:13.710269   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:14.208719   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:14.709683   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:15.209748   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:15.710466   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:16.209094   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:16.708900   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:17.210178   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:17.709320   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:18.208709   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:18.711788   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:19.209147   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:19.709274   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:20.215927   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:20.709487   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:21.209636   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:21.709453   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:22.209104   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:22.709403   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:23.209951   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:23.709366   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:24.208821   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:24.709494   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:25.209361   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:25.709820   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:26.210263   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:26.708770   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:27.209796   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:27.710441   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:28.210538   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:28.709362   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:29.208745   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:29.713247   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:30.209128   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:30.709079   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:31.209001   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:31.709304   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:32.208985   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:32.708946   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:33.208932   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:33.709461   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:34.211211   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:34.710234   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:35.209227   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:35.709023   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:36.208843   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:36.708561   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:37.209466   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:37.710118   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:38.210715   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:38.709625   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:39.209486   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:39.709309   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:40.209102   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:40.708785   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:41.209503   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:41.709006   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:42.210327   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:42.709654   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:43.209327   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:43.709108   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:44.210491   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:44.709518   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:45.209472   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:45.709105   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:46.209051   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:46.709227   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:47.209758   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:47.709152   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:48.208757   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:48.709591   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:49.208784   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:49.709224   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:50.209656   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:50.709222   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:51.208915   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:51.709281   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:52.209437   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:52.709067   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:53.209388   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:53.709821   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:54.210256   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:54.709004   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:55.210468   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:55.708503   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:56.210298   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:56.708960   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:57.209547   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:57.709509   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:58.209519   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:58.709279   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:59.209362   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:59.708363   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:00.209110   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:00.708846   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:01.209401   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:01.709242   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:02.209610   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:02.708360   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:03.209720   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:03.708485   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:04.208731   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:04.709494   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:05.208815   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:05.708950   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:06.211916   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:06.708827   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:07.209434   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:07.708859   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:08.209971   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:08.709487   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:09.208814   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:09.709339   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:10.209693   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:10.709073   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:11.208882   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:11.709587   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:12.216297   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:12.708620   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:13.209710   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:13.710293   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:14.209030   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:14.709846   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:15.209755   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:15.708775   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:16.209650   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:16.710182   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:17.208561   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:17.709020   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:18.209752   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:18.709934   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:19.208768   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:19.709685   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:20.211473   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:20.708882   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:21.209970   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:21.709072   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:22.209763   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:22.709161   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:23.209199   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:23.709476   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:24.209259   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:24.708905   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:25.210557   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:25.709447   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:26.209744   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:26.709864   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:27.209781   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:27.710207   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:28.209976   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:28.709670   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:29.209701   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:29.709229   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:30.209362   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:30.708762   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:31.209196   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:31.709242   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:32.210131   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:32.708822   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:33.209731   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:33.710255   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:34.209751   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:34.709687   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:35.209508   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:35.709380   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:36.209299   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:36.710415   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:37.208972   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:37.709755   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:38.210386   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:38.708945   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:39.209705   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:39.709625   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:40.209957   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:40.709140   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:41.209723   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:41.709186   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:42.209494   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:42.708817   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:43.208986   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:43.710319   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:44.209078   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:44.708386   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:45.209690   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:45.709034   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:46.208833   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:46.709451   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:47.209201   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:47.709554   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:48.209559   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:48.709724   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:49.209297   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:49.708616   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:50.209756   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:50.708769   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:51.209737   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:51.709288   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:52.210762   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:52.709462   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:53.208546   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:53.708920   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:54.209535   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	[... 193 further identical poll entries, one roughly every 500 ms from 04:16:54 to 04:18:31, all reporting state Pending ...]
	I1219 04:18:31.209051   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
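
The block above is the minikube test helper (kapi.go:96) polling the API server roughly every 500 ms for pods matching the label selector app.kubernetes.io/name=kubernetes-dashboard-web and logging the phase it observes; across 195 samples (04:16:54 to 04:18:31) the dashboard-web pod never leaves Pending. The sketch below shows the same polling pattern with client-go. It is an illustration only, not minikube's kapi implementation; the kubeconfig path, namespace, and timeout are assumptions.

    // podwait.go: illustrative sketch of a label-selector poll like the one logged
    // above (NOT minikube's kapi helper; names, namespace, and timeout are assumed).
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitForPodsRunning polls every 500 ms until every pod matching selector in
    // namespace reports phase Running, or the timeout expires.
    func waitForPodsRunning(ctx context.Context, cs kubernetes.Interface, namespace, selector string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			pods, err := cs.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{LabelSelector: selector})
    			if err != nil {
    				return false, nil // treat transient API errors as "keep polling"
    			}
    			if len(pods.Items) == 0 {
    				return false, nil
    			}
    			for _, p := range pods.Items {
    				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
    				if p.Status.Phase != corev1.PodRunning {
    					return false, nil
    				}
    			}
    			return true, nil
    		})
    }

    func main() {
    	// Assumed kubeconfig location (~/.kube/config) for a local minikube profile.
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	err = waitForPodsRunning(context.Background(), cs,
    		"kubernetes-dashboard", "app.kubernetes.io/name=kubernetes-dashboard-web", 5*time.Minute)
    	fmt.Println("wait result:", err)
    }
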
	
	
	==> CRI-O <==
	Dec 19 04:18:35 embed-certs-244717 crio[890]: time="2025-12-19 04:18:35.084207321Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:1a1db854745cdc3e34fd3bbc3ef18539f9fcd32b32e7edfcb77db507876edf83,Metadata:&PodSandboxMetadata{Name:kubernetes-dashboard-kong-9849c64bd-ghhd7,Uid:051c3643-370d-478b-a0d6-5012d03a4d3e,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766116470019661776,Labels:map[string]string{app: kubernetes-dashboard-kong,app.kubernetes.io/component: app,app.kubernetes.io/instance: kubernetes-dashboard,app.kubernetes.io/managed-by: Helm,app.kubernetes.io/name: kong,app.kubernetes.io/version: 3.9,helm.sh/chart: kong-2.52.0,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kubernetes-dashboard-kong-9849c64bd-ghhd7,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 051c3643-370d-478b-a0d6-5012d03a4d3e,pod-template-hash: 9849c64bd,version: 3.9,},Annotations:map[string]string{kubernetes.io/config.seen:
2025-12-19T03:54:29.644094696Z,kubernetes.io/config.source: api,kuma.io/gateway: enabled,kuma.io/service-account-token-volume: kubernetes-dashboard-kong-token,traffic.sidecar.istio.io/includeInboundPorts: ,},RuntimeHandler:,},&PodSandbox{Id:2eb3e6089abbf93ca12d005ae2ef05931a11614ff922972ef8a48253d1847c1b,Metadata:&PodSandboxMetadata{Name:kubernetes-dashboard-auth-6b55998857-99nts,Uid:be79c314-fcbc-410f-a245-ca04752aeb23,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766116470016641511,Labels:map[string]string{app.kubernetes.io/component: auth,app.kubernetes.io/instance: kubernetes-dashboard,app.kubernetes.io/managed-by: Helm,app.kubernetes.io/name: kubernetes-dashboard-auth,app.kubernetes.io/part-of: kubernetes-dashboard,app.kubernetes.io/version: 1.4.0,helm.sh/chart: kubernetes-dashboard-7.14.0,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kubernetes-dashboard-auth-6b55998857-99nts,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: be79c314-fcbc-4
10f-a245-ca04752aeb23,pod-template-hash: 6b55998857,},Annotations:map[string]string{checksum/config: ed9eece39e9fe218fa5fb9bf2428a78dc19b578c344e94d7b6271706ba6fd4ae,kubernetes.io/config.seen: 2025-12-19T03:54:29.599499609Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:924f40151977e5bfb1ccaff03f56e971844d9c175c836bb342aba5ea2f11b035,Metadata:&PodSandboxMetadata{Name:kubernetes-dashboard-api-677b969f5d-xr86s,Uid:2468eb14-0ebb-45fd-abf4-63a8e1309258,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766116469969038932,Labels:map[string]string{app.kubernetes.io/component: api,app.kubernetes.io/instance: kubernetes-dashboard,app.kubernetes.io/managed-by: Helm,app.kubernetes.io/name: kubernetes-dashboard-api,app.kubernetes.io/part-of: kubernetes-dashboard,app.kubernetes.io/version: 1.14.0,helm.sh/chart: kubernetes-dashboard-7.14.0,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kubernetes-dashboard-api-677b969f5d-xr86s,io.kubernetes.pod.namespace: kubernetes-d
ashboard,io.kubernetes.pod.uid: 2468eb14-0ebb-45fd-abf4-63a8e1309258,pod-template-hash: 677b969f5d,},Annotations:map[string]string{checksum/config: e55e0dd787e7da9854c0366ab3f9b6db13be0ca8f29de374e28a6752c7f2ec0f,kubernetes.io/config.seen: 2025-12-19T03:54:29.589669443Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2a40ff7ab2ef0ef653d2a04f502a3d4c85b02e047ef168058243a49cb1d8ff72,Metadata:&PodSandboxMetadata{Name:kubernetes-dashboard-web-5c9f966b98-7jhl7,Uid:9e539d4c-644f-4905-a4e7-222f6b6aa324,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766116469953769312,Labels:map[string]string{app.kubernetes.io/component: web,app.kubernetes.io/instance: kubernetes-dashboard,app.kubernetes.io/managed-by: Helm,app.kubernetes.io/name: kubernetes-dashboard-web,app.kubernetes.io/part-of: kubernetes-dashboard,app.kubernetes.io/version: 1.7.0,helm.sh/chart: kubernetes-dashboard-7.14.0,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kubernetes-dashboard-web-5c9f966b98-7
jhl7,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 9e539d4c-644f-4905-a4e7-222f6b6aa324,pod-template-hash: 5c9f966b98,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-19T03:54:29.596715170Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9a612b72097dccd9b84781240ca83758eb2f02f534e7c3334471dba5eda2b275,Metadata:&PodSandboxMetadata{Name:kubernetes-dashboard-metrics-scraper-7685fd8b77-gpdxz,Uid:aed4238c-131b-42c2-8c9f-f75f42efd32a,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766116469952054264,Labels:map[string]string{app.kubernetes.io/component: metrics-scraper,app.kubernetes.io/instance: kubernetes-dashboard,app.kubernetes.io/managed-by: Helm,app.kubernetes.io/name: kubernetes-dashboard-metrics-scraper,app.kubernetes.io/part-of: kubernetes-dashboard,app.kubernetes.io/version: 1.2.2,helm.sh/chart: kubernetes-dashboard-7.14.0,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kubernetes-dashboard-metrics-scraper-
7685fd8b77-gpdxz,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: aed4238c-131b-42c2-8c9f-f75f42efd32a,pod-template-hash: 7685fd8b77,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-19T03:54:29.592785386Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6cef58f979bdcf009b1664773464c0ad2edaf1a7e687c8d62dfee39025f1247f,Metadata:&PodSandboxMetadata{Name:busybox,Uid:5641715a-fb85-45c8-b1e2-de3c394086ed,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766116465761431845,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5641715a-fb85-45c8-b1e2-de3c394086ed,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-19T03:54:21.838819746Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0614affd1728ae851d74f833d7d34176fd6ce8e8688ac8c4f9ee867519f3c2ea,Metadata:&PodSandboxMetadata{Name:coredns-66
bc5c9577-9ptrv,Uid:22226444-faa6-420d-a862-1ef0441a80e4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766116465759432629,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-9ptrv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22226444-faa6-420d-a862-1ef0441a80e4,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-19T03:54:21.838824523Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b2bb22ff667ff94ba5c3e7035762c398a5997125a1fd465d4c53b461ca2bd240,Metadata:&PodSandboxMetadata{Name:metrics-server-746fcd58dc-x74d4,Uid:e4a33dcc-d3b6-45a0-92d7-6cfbc5df35b2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766116463963186082,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-746fcd58dc-x74d4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4a33dcc-d3b6-45a0-92d7-6cfbc5df35
b2,k8s-app: metrics-server,pod-template-hash: 746fcd58dc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-19T03:54:21.838834019Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fd26055d1fc31c3bdf0430cb1b11d62182c2db3fbaaa27195abc721cdbb036d1,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:99ff9c60-2f30-457a-8cb5-e030eb64a58e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766116462168175506,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99ff9c60-2f30-457a-8cb5-e030eb64a58e,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storag
e-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-12-19T03:54:21.838831720Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ed4e137eed69f1576ef7b849ef51524cd10889219acb0222c0ad3319f593ed6b,Metadata:&PodSandboxMetadata{Name:kube-proxy-p8gvm,Uid:283607b2-9e6c-44f4-9c9d-7d713c71fb8c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766116462167314974,Labels:map[string]string{controller-revision-hash: 55c7cb7b75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-p8gvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 283607b2-9e6c-44f4-9c9d-7d713c71fb8c,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-19T03:54:21.838829247Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d3374678022d82759234913a106fbe11b79862f9660c0389e5a1ae91fccd69ce,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-244717,Uid:51fc709cdbf261d7f78621b653d0027b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766116457717042837,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-244717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51fc709cdbf261d7f78621b653d0027b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.83.54:2379,kubernetes.io/config.hash: 51fc709cdbf261d7f78621b653d0027b,kubernetes.io/config.seen: 2025-12-19T03:54:16.875659211Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandb
ox{Id:cb79701937629a720cbd58b60479edfa60b769186b690e52241d588128c3a052,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-244717,Uid:3849b25e9ef521e7689e47039ae86b1a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766116457711988331,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-244717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3849b25e9ef521e7689e47039ae86b1a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3849b25e9ef521e7689e47039ae86b1a,kubernetes.io/config.seen: 2025-12-19T03:54:16.842671617Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a4a72d74a0d7973721c035b2f02eb51fb3a397ecb63af2ca599f833e8e931d40,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-244717,Uid:42497a262dfe4f576d621089344401ac,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766116457692186043,Labels:map[string]string{compon
ent: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-244717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42497a262dfe4f576d621089344401ac,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.83.54:8443,kubernetes.io/config.hash: 42497a262dfe4f576d621089344401ac,kubernetes.io/config.seen: 2025-12-19T03:54:16.842648132Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4c181a507e6b05a84a1966aebd53b5eeb271f50ebc1db5d936520d1e47685492,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-244717,Uid:bed550c60240cd3e16a8090bdf714aad,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766116457687141065,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-244717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bed5
50c60240cd3e16a8090bdf714aad,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: bed550c60240cd3e16a8090bdf714aad,kubernetes.io/config.seen: 2025-12-19T03:54:16.842670027Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=6d3aaace-6572-4af9-8e43-6ab060885914 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 19 04:18:35 embed-certs-244717 crio[890]: time="2025-12-19 04:18:35.085359048Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b322ad42-87f1-47ca-87e9-84ccf856f857 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:18:35 embed-certs-244717 crio[890]: time="2025-12-19 04:18:35.085438177Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b322ad42-87f1-47ca-87e9-84ccf856f857 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:18:35 embed-certs-244717 crio[890]: time="2025-12-19 04:18:35.085622948Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7f7f1d6992811a3f809cabb4d69d174028bcee28fdbabaface79c4622f2b40bd,PodSandboxId:fd26055d1fc31c3bdf0430cb1b11d62182c2db3fbaaa27195abc721cdbb036d1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766116493237678965,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99ff9c60-2f30-457a-8cb5-e030eb64a58e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e2c0887c51c89e0e584dc73353e522ffc4668546f521d1bb624d09f93ffb862,PodSandboxId:6cef58f979bdcf009b1664773464c0ad2edaf1a7e687c8d62dfee39025f1247f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1766116469616408924,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5641715a-fb85-45c8-b1e2-de3c394086ed,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4411653e4250dfe0da57038d14bc69022addd7ec37037fc8a8955e1655974a27,PodSandboxId:0614affd1728ae851d74f833d7d34176fd6ce8e8688ac8c4f9ee867519f3c2ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1766116466143726495,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-9ptrv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22226444-faa6-420d-a862-1ef0441a80e4,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:954447f0c968092e3dfb23f85b41f94836775c9f1bc23d1e84e6a62e0c4f6749,PodSandboxId:ed4e137eed69f1576ef7b849ef51524cd10889219acb0222c0ad3319f593ed6b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1766116462507695033,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p8gvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 283607b2-9e6c-44f4-9c9d-7d713c71fb8c,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffc03b0b75719a3766a04010c15c50585e5b9d66412e39a40f987b194e653af8,PodSandboxId:fd26055d1fc31c3bdf0430cb1b11d62182c2db3fbaaa27195abc721cdbb036d1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766116462409761046,Labels:map[string]string{io.kubernetes.container.name: storage-provi
sioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99ff9c60-2f30-457a-8cb5-e030eb64a58e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5c00fb043f114c6d4ca21d53aa260dcba5ec8b805a2caba68166dab5cad8572,PodSandboxId:4c181a507e6b05a84a1966aebd53b5eeb271f50ebc1db5d936520d1e47685492,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766116457998299600,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manag
er,io.kubernetes.pod.name: kube-controller-manager-embed-certs-244717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bed550c60240cd3e16a8090bdf714aad,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e133fc618150f001c2cb9f53d2de00fb776e3bddc7a2b3672a3542025045aa71,PodSandboxId:d3374678022d82759234913a106fbe11b79862f9660c0389e5a1ae91fccd69ce,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTA
INER_RUNNING,CreatedAt:1766116457981525503,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-244717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51fc709cdbf261d7f78621b653d0027b,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e68b6704fdf3c1afa48de9036c1b5beeb789d7c238d995602002285db55f9c7,PodSandboxId:cb79701937629a720cbd58b60479edfa60b769186b690e52241d588128c3a052,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler
:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766116457963694981,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-244717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3849b25e9ef521e7689e47039ae86b1a,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1d9289f2c9d67ab5f9296e09735ae1579ea00b277c7b6055c4c99d35345cd39,PodSandboxId:a4a72d74a0d7973721c035b2f02eb51fb3a397ecb63af2ca599f833e8e931d40,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa27095f5619377172f3
d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1766116457957635365,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-244717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42497a262dfe4f576d621089344401ac,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b322ad42-87f1-47ca-87e9-84ccf856f857 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:18:35 embed-certs-244717 crio[890]: time="2025-12-19 04:18:35.106936688Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c4667e04-9742-4c42-9f52-2ce39896c0f1 name=/runtime.v1.RuntimeService/Version
	Dec 19 04:18:35 embed-certs-244717 crio[890]: time="2025-12-19 04:18:35.107010696Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c4667e04-9742-4c42-9f52-2ce39896c0f1 name=/runtime.v1.RuntimeService/Version
	Dec 19 04:18:35 embed-certs-244717 crio[890]: time="2025-12-19 04:18:35.108086427Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7bfa671c-23f7-4802-a2ae-8e33ca76d62a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 04:18:35 embed-certs-244717 crio[890]: time="2025-12-19 04:18:35.108555138Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766117915108529752,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:136931,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7bfa671c-23f7-4802-a2ae-8e33ca76d62a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 04:18:35 embed-certs-244717 crio[890]: time="2025-12-19 04:18:35.109349404Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d4277773-9565-4742-8455-984b0b79a69f name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:18:35 embed-certs-244717 crio[890]: time="2025-12-19 04:18:35.109635867Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d4277773-9565-4742-8455-984b0b79a69f name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:18:35 embed-certs-244717 crio[890]: time="2025-12-19 04:18:35.110302451Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7f7f1d6992811a3f809cabb4d69d174028bcee28fdbabaface79c4622f2b40bd,PodSandboxId:fd26055d1fc31c3bdf0430cb1b11d62182c2db3fbaaa27195abc721cdbb036d1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766116493237678965,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99ff9c60-2f30-457a-8cb5-e030eb64a58e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e2c0887c51c89e0e584dc73353e522ffc4668546f521d1bb624d09f93ffb862,PodSandboxId:6cef58f979bdcf009b1664773464c0ad2edaf1a7e687c8d62dfee39025f1247f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1766116469616408924,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5641715a-fb85-45c8-b1e2-de3c394086ed,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4411653e4250dfe0da57038d14bc69022addd7ec37037fc8a8955e1655974a27,PodSandboxId:0614affd1728ae851d74f833d7d34176fd6ce8e8688ac8c4f9ee867519f3c2ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1766116466143726495,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-9ptrv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22226444-faa6-420d-a862-1ef0441a80e4,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:954447f0c968092e3dfb23f85b41f94836775c9f1bc23d1e84e6a62e0c4f6749,PodSandboxId:ed4e137eed69f1576ef7b849ef51524cd10889219acb0222c0ad3319f593ed6b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1766116462507695033,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p8gvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 283607b2-9e6c-44f4-9c9d-7d713c71fb8c,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffc03b0b75719a3766a04010c15c50585e5b9d66412e39a40f987b194e653af8,PodSandboxId:fd26055d1fc31c3bdf0430cb1b11d62182c2db3fbaaa27195abc721cdbb036d1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766116462409761046,Labels:map[string]string{io.kubernetes.container.name: storage-provi
sioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99ff9c60-2f30-457a-8cb5-e030eb64a58e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5c00fb043f114c6d4ca21d53aa260dcba5ec8b805a2caba68166dab5cad8572,PodSandboxId:4c181a507e6b05a84a1966aebd53b5eeb271f50ebc1db5d936520d1e47685492,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766116457998299600,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manag
er,io.kubernetes.pod.name: kube-controller-manager-embed-certs-244717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bed550c60240cd3e16a8090bdf714aad,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e133fc618150f001c2cb9f53d2de00fb776e3bddc7a2b3672a3542025045aa71,PodSandboxId:d3374678022d82759234913a106fbe11b79862f9660c0389e5a1ae91fccd69ce,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTA
INER_RUNNING,CreatedAt:1766116457981525503,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-244717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51fc709cdbf261d7f78621b653d0027b,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e68b6704fdf3c1afa48de9036c1b5beeb789d7c238d995602002285db55f9c7,PodSandboxId:cb79701937629a720cbd58b60479edfa60b769186b690e52241d588128c3a052,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler
:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766116457963694981,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-244717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3849b25e9ef521e7689e47039ae86b1a,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1d9289f2c9d67ab5f9296e09735ae1579ea00b277c7b6055c4c99d35345cd39,PodSandboxId:a4a72d74a0d7973721c035b2f02eb51fb3a397ecb63af2ca599f833e8e931d40,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa27095f5619377172f3
d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1766116457957635365,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-244717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42497a262dfe4f576d621089344401ac,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d4277773-9565-4742-8455-984b0b79a69f name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:18:35 embed-certs-244717 crio[890]: time="2025-12-19 04:18:35.137567389Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d5ca04d8-de3e-466e-8d01-259f62cbf6c0 name=/runtime.v1.RuntimeService/Version
	Dec 19 04:18:35 embed-certs-244717 crio[890]: time="2025-12-19 04:18:35.137770829Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d5ca04d8-de3e-466e-8d01-259f62cbf6c0 name=/runtime.v1.RuntimeService/Version
	Dec 19 04:18:35 embed-certs-244717 crio[890]: time="2025-12-19 04:18:35.138974753Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=596eedca-13c1-4ade-bfd9-915bd2d5728a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 04:18:35 embed-certs-244717 crio[890]: time="2025-12-19 04:18:35.139453824Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766117915139373913,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:136931,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=596eedca-13c1-4ade-bfd9-915bd2d5728a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 04:18:35 embed-certs-244717 crio[890]: time="2025-12-19 04:18:35.140417508Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=668264d3-8b1d-49aa-9b8c-8c4a981cffb4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:18:35 embed-certs-244717 crio[890]: time="2025-12-19 04:18:35.140510563Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=668264d3-8b1d-49aa-9b8c-8c4a981cffb4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:18:35 embed-certs-244717 crio[890]: time="2025-12-19 04:18:35.140726636Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7f7f1d6992811a3f809cabb4d69d174028bcee28fdbabaface79c4622f2b40bd,PodSandboxId:fd26055d1fc31c3bdf0430cb1b11d62182c2db3fbaaa27195abc721cdbb036d1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766116493237678965,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99ff9c60-2f30-457a-8cb5-e030eb64a58e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e2c0887c51c89e0e584dc73353e522ffc4668546f521d1bb624d09f93ffb862,PodSandboxId:6cef58f979bdcf009b1664773464c0ad2edaf1a7e687c8d62dfee39025f1247f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1766116469616408924,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5641715a-fb85-45c8-b1e2-de3c394086ed,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4411653e4250dfe0da57038d14bc69022addd7ec37037fc8a8955e1655974a27,PodSandboxId:0614affd1728ae851d74f833d7d34176fd6ce8e8688ac8c4f9ee867519f3c2ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1766116466143726495,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-9ptrv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22226444-faa6-420d-a862-1ef0441a80e4,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:954447f0c968092e3dfb23f85b41f94836775c9f1bc23d1e84e6a62e0c4f6749,PodSandboxId:ed4e137eed69f1576ef7b849ef51524cd10889219acb0222c0ad3319f593ed6b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1766116462507695033,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p8gvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 283607b2-9e6c-44f4-9c9d-7d713c71fb8c,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffc03b0b75719a3766a04010c15c50585e5b9d66412e39a40f987b194e653af8,PodSandboxId:fd26055d1fc31c3bdf0430cb1b11d62182c2db3fbaaa27195abc721cdbb036d1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766116462409761046,Labels:map[string]string{io.kubernetes.container.name: storage-provi
sioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99ff9c60-2f30-457a-8cb5-e030eb64a58e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5c00fb043f114c6d4ca21d53aa260dcba5ec8b805a2caba68166dab5cad8572,PodSandboxId:4c181a507e6b05a84a1966aebd53b5eeb271f50ebc1db5d936520d1e47685492,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766116457998299600,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manag
er,io.kubernetes.pod.name: kube-controller-manager-embed-certs-244717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bed550c60240cd3e16a8090bdf714aad,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e133fc618150f001c2cb9f53d2de00fb776e3bddc7a2b3672a3542025045aa71,PodSandboxId:d3374678022d82759234913a106fbe11b79862f9660c0389e5a1ae91fccd69ce,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTA
INER_RUNNING,CreatedAt:1766116457981525503,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-244717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51fc709cdbf261d7f78621b653d0027b,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e68b6704fdf3c1afa48de9036c1b5beeb789d7c238d995602002285db55f9c7,PodSandboxId:cb79701937629a720cbd58b60479edfa60b769186b690e52241d588128c3a052,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler
:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766116457963694981,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-244717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3849b25e9ef521e7689e47039ae86b1a,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1d9289f2c9d67ab5f9296e09735ae1579ea00b277c7b6055c4c99d35345cd39,PodSandboxId:a4a72d74a0d7973721c035b2f02eb51fb3a397ecb63af2ca599f833e8e931d40,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa27095f5619377172f3
d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1766116457957635365,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-244717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42497a262dfe4f576d621089344401ac,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=668264d3-8b1d-49aa-9b8c-8c4a981cffb4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:18:35 embed-certs-244717 crio[890]: time="2025-12-19 04:18:35.173793940Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=381a327f-b47a-4559-a435-c07452d738ae name=/runtime.v1.RuntimeService/Version
	Dec 19 04:18:35 embed-certs-244717 crio[890]: time="2025-12-19 04:18:35.173875056Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=381a327f-b47a-4559-a435-c07452d738ae name=/runtime.v1.RuntimeService/Version
	Dec 19 04:18:35 embed-certs-244717 crio[890]: time="2025-12-19 04:18:35.175553346Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8f52362b-f7df-46b1-a437-72636869890b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 04:18:35 embed-certs-244717 crio[890]: time="2025-12-19 04:18:35.176455725Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766117915176408105,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:136931,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8f52362b-f7df-46b1-a437-72636869890b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 04:18:35 embed-certs-244717 crio[890]: time="2025-12-19 04:18:35.178917404Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4be4ba06-534a-4b4b-9936-74318e168dfc name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:18:35 embed-certs-244717 crio[890]: time="2025-12-19 04:18:35.179006301Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4be4ba06-534a-4b4b-9936-74318e168dfc name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:18:35 embed-certs-244717 crio[890]: time="2025-12-19 04:18:35.179624060Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7f7f1d6992811a3f809cabb4d69d174028bcee28fdbabaface79c4622f2b40bd,PodSandboxId:fd26055d1fc31c3bdf0430cb1b11d62182c2db3fbaaa27195abc721cdbb036d1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766116493237678965,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99ff9c60-2f30-457a-8cb5-e030eb64a58e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e2c0887c51c89e0e584dc73353e522ffc4668546f521d1bb624d09f93ffb862,PodSandboxId:6cef58f979bdcf009b1664773464c0ad2edaf1a7e687c8d62dfee39025f1247f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1766116469616408924,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5641715a-fb85-45c8-b1e2-de3c394086ed,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4411653e4250dfe0da57038d14bc69022addd7ec37037fc8a8955e1655974a27,PodSandboxId:0614affd1728ae851d74f833d7d34176fd6ce8e8688ac8c4f9ee867519f3c2ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1766116466143726495,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-9ptrv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22226444-faa6-420d-a862-1ef0441a80e4,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:954447f0c968092e3dfb23f85b41f94836775c9f1bc23d1e84e6a62e0c4f6749,PodSandboxId:ed4e137eed69f1576ef7b849ef51524cd10889219acb0222c0ad3319f593ed6b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1766116462507695033,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p8gvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 283607b2-9e6c-44f4-9c9d-7d713c71fb8c,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffc03b0b75719a3766a04010c15c50585e5b9d66412e39a40f987b194e653af8,PodSandboxId:fd26055d1fc31c3bdf0430cb1b11d62182c2db3fbaaa27195abc721cdbb036d1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766116462409761046,Labels:map[string]string{io.kubernetes.container.name: storage-provi
sioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99ff9c60-2f30-457a-8cb5-e030eb64a58e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5c00fb043f114c6d4ca21d53aa260dcba5ec8b805a2caba68166dab5cad8572,PodSandboxId:4c181a507e6b05a84a1966aebd53b5eeb271f50ebc1db5d936520d1e47685492,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766116457998299600,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manag
er,io.kubernetes.pod.name: kube-controller-manager-embed-certs-244717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bed550c60240cd3e16a8090bdf714aad,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e133fc618150f001c2cb9f53d2de00fb776e3bddc7a2b3672a3542025045aa71,PodSandboxId:d3374678022d82759234913a106fbe11b79862f9660c0389e5a1ae91fccd69ce,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTA
INER_RUNNING,CreatedAt:1766116457981525503,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-244717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51fc709cdbf261d7f78621b653d0027b,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e68b6704fdf3c1afa48de9036c1b5beeb789d7c238d995602002285db55f9c7,PodSandboxId:cb79701937629a720cbd58b60479edfa60b769186b690e52241d588128c3a052,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler
:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766116457963694981,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-244717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3849b25e9ef521e7689e47039ae86b1a,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1d9289f2c9d67ab5f9296e09735ae1579ea00b277c7b6055c4c99d35345cd39,PodSandboxId:a4a72d74a0d7973721c035b2f02eb51fb3a397ecb63af2ca599f833e8e931d40,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa27095f5619377172f3
d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1766116457957635365,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-244717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42497a262dfe4f576d621089344401ac,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4be4ba06-534a-4b4b-9936-74318e168dfc name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	7f7f1d6992811       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      23 minutes ago      Running             storage-provisioner       2                   fd26055d1fc31       storage-provisioner                          kube-system
	5e2c0887c51c8       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   24 minutes ago      Running             busybox                   1                   6cef58f979bdc       busybox                                      default
	4411653e4250d       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      24 minutes ago      Running             coredns                   1                   0614affd1728a       coredns-66bc5c9577-9ptrv                     kube-system
	954447f0c9680       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                      24 minutes ago      Running             kube-proxy                1                   ed4e137eed69f       kube-proxy-p8gvm                             kube-system
	ffc03b0b75719       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      24 minutes ago      Exited              storage-provisioner       1                   fd26055d1fc31       storage-provisioner                          kube-system
	d5c00fb043f11       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                      24 minutes ago      Running             kube-controller-manager   1                   4c181a507e6b0       kube-controller-manager-embed-certs-244717   kube-system
	e133fc618150f       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      24 minutes ago      Running             etcd                      1                   d3374678022d8       etcd-embed-certs-244717                      kube-system
	2e68b6704fdf3       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                      24 minutes ago      Running             kube-scheduler            1                   cb79701937629       kube-scheduler-embed-certs-244717            kube-system
	f1d9289f2c9d6       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                      24 minutes ago      Running             kube-apiserver            1                   a4a72d74a0d79       kube-apiserver-embed-certs-244717            kube-system
	
	
	==> coredns [4411653e4250dfe0da57038d14bc69022addd7ec37037fc8a8955e1655974a27] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ecad3ac8c72227dcf0d7a418ea5051ee155dd74d241a13c4787cc61906568517b5647c8519c78ef2c6b724422ee4b03d6cfb27e9a87140163726e83184faf782
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38615 - 52038 "HINFO IN 3058748700005490112.3296782353744935446. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.032425758s
	
	
	==> describe nodes <==
	Name:               embed-certs-244717
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-244717
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=embed-certs-244717
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T03_51_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 03:51:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-244717
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 04:18:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 04:16:58 +0000   Fri, 19 Dec 2025 03:51:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 04:16:58 +0000   Fri, 19 Dec 2025 03:51:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 04:16:58 +0000   Fri, 19 Dec 2025 03:51:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 04:16:58 +0000   Fri, 19 Dec 2025 03:54:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.54
	  Hostname:    embed-certs-244717
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	System Info:
	  Machine ID:                 a2c78c5e7dae44bfa155fa249ad61e2f
	  System UUID:                a2c78c5e-7dae-44bf-a155-fa249ad61e2f
	  Boot ID:                    f99a3e1d-0ea3-4c69-8edf-039724ce6d90
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 coredns-66bc5c9577-9ptrv                                 100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     26m
	  kube-system                 etcd-embed-certs-244717                                  100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         26m
	  kube-system                 kube-apiserver-embed-certs-244717                        250m (12%)    0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-controller-manager-embed-certs-244717               200m (10%)    0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-proxy-p8gvm                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-scheduler-embed-certs-244717                        100m (5%)     0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 metrics-server-746fcd58dc-x74d4                          100m (5%)     0 (0%)      200Mi (6%)       0 (0%)         26m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kubernetes-dashboard        kubernetes-dashboard-api-677b969f5d-xr86s                100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    24m
	  kubernetes-dashboard        kubernetes-dashboard-auth-6b55998857-99nts               100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    24m
	  kubernetes-dashboard        kubernetes-dashboard-kong-9849c64bd-ghhd7                0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kubernetes-dashboard        kubernetes-dashboard-metrics-scraper-7685fd8b77-gpdxz    100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    24m
	  kubernetes-dashboard        kubernetes-dashboard-web-5c9f966b98-7jhl7                100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests      Limits
	  --------           --------      ------
	  cpu                1250m (62%)   1 (50%)
	  memory             1170Mi (39%)  1770Mi (59%)
	  ephemeral-storage  0 (0%)        0 (0%)
	  hugepages-2Mi      0 (0%)        0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 26m                kube-proxy       
	  Normal   Starting                 24m                kube-proxy       
	  Normal   Starting                 27m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  27m (x8 over 27m)  kubelet          Node embed-certs-244717 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    27m (x8 over 27m)  kubelet          Node embed-certs-244717 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     27m (x7 over 27m)  kubelet          Node embed-certs-244717 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  27m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    26m                kubelet          Node embed-certs-244717 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  26m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  26m                kubelet          Node embed-certs-244717 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     26m                kubelet          Node embed-certs-244717 status is now: NodeHasSufficientPID
	  Normal   Starting                 26m                kubelet          Starting kubelet.
	  Normal   NodeReady                26m                kubelet          Node embed-certs-244717 status is now: NodeReady
	  Normal   RegisteredNode           26m                node-controller  Node embed-certs-244717 event: Registered Node embed-certs-244717 in Controller
	  Normal   Starting                 24m                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  24m (x8 over 24m)  kubelet          Node embed-certs-244717 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    24m (x8 over 24m)  kubelet          Node embed-certs-244717 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     24m (x7 over 24m)  kubelet          Node embed-certs-244717 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 24m                kubelet          Node embed-certs-244717 has been rebooted, boot id: f99a3e1d-0ea3-4c69-8edf-039724ce6d90
	  Normal   RegisteredNode           24m                node-controller  Node embed-certs-244717 event: Registered Node embed-certs-244717 in Controller
	
	
	==> dmesg <==
	[Dec19 03:53] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[Dec19 03:54] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.005578] (rpcbind)[121]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.701037] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000021] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.115440] kauditd_printk_skb: 88 callbacks suppressed
	[  +5.696306] kauditd_printk_skb: 196 callbacks suppressed
	[  +2.250665] kauditd_printk_skb: 275 callbacks suppressed
	[  +6.334318] kauditd_printk_skb: 203 callbacks suppressed
	[Dec19 03:55] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [e133fc618150f001c2cb9f53d2de00fb776e3bddc7a2b3672a3542025045aa71] <==
	{"level":"warn","ts":"2025-12-19T03:54:54.789793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:54:54.816072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:54:54.826726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:54:54.840338Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:54:54.860416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:54:54.873869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:54:54.889471Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:54:54.912791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59894","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-19T04:04:19.428662Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1080}
	{"level":"info","ts":"2025-12-19T04:04:19.452341Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1080,"took":"23.334454ms","hash":935815186,"current-db-size-bytes":4321280,"current-db-size":"4.3 MB","current-db-size-in-use-bytes":1945600,"current-db-size-in-use":"1.9 MB"}
	{"level":"info","ts":"2025-12-19T04:04:19.452450Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":935815186,"revision":1080,"compact-revision":-1}
	{"level":"info","ts":"2025-12-19T04:09:19.435106Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1440}
	{"level":"info","ts":"2025-12-19T04:09:19.439184Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1440,"took":"3.70438ms","hash":1867608454,"current-db-size-bytes":4321280,"current-db-size":"4.3 MB","current-db-size-in-use-bytes":2797568,"current-db-size-in-use":"2.8 MB"}
	{"level":"info","ts":"2025-12-19T04:09:19.439801Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1867608454,"revision":1440,"compact-revision":1080}
	{"level":"info","ts":"2025-12-19T04:12:30.291943Z","caller":"traceutil/trace.go:172","msg":"trace[1880746599] linearizableReadLoop","detail":"{readStateIndex:2328; appliedIndex:2328; }","duration":"135.208422ms","start":"2025-12-19T04:12:30.156688Z","end":"2025-12-19T04:12:30.291896Z","steps":["trace[1880746599] 'read index received'  (duration: 135.201457ms)","trace[1880746599] 'applied index is now lower than readState.Index'  (duration: 6.241µs)"],"step_count":2}
	{"level":"info","ts":"2025-12-19T04:12:30.292065Z","caller":"traceutil/trace.go:172","msg":"trace[230746839] transaction","detail":"{read_only:false; response_revision:2047; number_of_response:1; }","duration":"141.383145ms","start":"2025-12-19T04:12:30.150672Z","end":"2025-12-19T04:12:30.292055Z","steps":["trace[230746839] 'process raft request'  (duration: 141.265218ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T04:12:30.292289Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"135.481955ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T04:12:30.292410Z","caller":"traceutil/trace.go:172","msg":"trace[2000673964] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2047; }","duration":"135.715281ms","start":"2025-12-19T04:12:30.156685Z","end":"2025-12-19T04:12:30.292400Z","steps":["trace[2000673964] 'agreement among raft nodes before linearized reading'  (duration: 135.44496ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T04:12:30.292579Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"108.708039ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T04:12:30.292610Z","caller":"traceutil/trace.go:172","msg":"trace[294340806] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:2047; }","duration":"108.756615ms","start":"2025-12-19T04:12:30.183844Z","end":"2025-12-19T04:12:30.292600Z","steps":["trace[294340806] 'agreement among raft nodes before linearized reading'  (duration: 108.687072ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T04:12:30.292695Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"100.682303ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T04:12:30.292720Z","caller":"traceutil/trace.go:172","msg":"trace[1483377647] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2047; }","duration":"100.711938ms","start":"2025-12-19T04:12:30.192001Z","end":"2025-12-19T04:12:30.292713Z","steps":["trace[1483377647] 'agreement among raft nodes before linearized reading'  (duration: 100.669152ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T04:14:19.442267Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1868}
	{"level":"info","ts":"2025-12-19T04:14:19.447314Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1868,"took":"4.433278ms","hash":2019657403,"current-db-size-bytes":4321280,"current-db-size":"4.3 MB","current-db-size-in-use-bytes":2625536,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2025-12-19T04:14:19.447390Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2019657403,"revision":1868,"compact-revision":1440}
	
	
	==> kernel <==
	 04:18:35 up 24 min,  0 users,  load average: 0.62, 0.47, 0.31
	Linux embed-certs-244717 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Dec 17 12:49:57 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [f1d9289f2c9d67ab5f9296e09735ae1579ea00b277c7b6055c4c99d35345cd39] <==
	I1219 04:14:22.196579       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 04:14:22.196458       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 04:14:22.196633       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1219 04:14:22.197886       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 04:15:22.197667       1 handler_proxy.go:99] no RequestInfo found in the context
	W1219 04:15:22.197969       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 04:15:22.198029       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1219 04:15:22.198042       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1219 04:15:22.197970       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1219 04:15:22.200013       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 04:17:22.199147       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 04:17:22.199312       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1219 04:17:22.199347       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 04:17:22.200711       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 04:17:22.200747       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1219 04:17:22.200757       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [d5c00fb043f114c6d4ca21d53aa260dcba5ec8b805a2caba68166dab5cad8572] <==
	I1219 04:12:26.291058       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:12:56.151006       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:12:56.299351       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:13:26.162445       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:13:26.308399       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:13:56.167498       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:13:56.317273       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:14:26.173118       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:14:26.326411       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:14:56.178033       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:14:56.333929       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:15:26.183595       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:15:26.344079       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:15:56.188945       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:15:56.353724       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:16:26.195729       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:16:26.364278       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:16:56.200604       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:16:56.372626       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:17:26.208773       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:17:26.380410       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:17:56.213468       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:17:56.389271       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:18:26.218456       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:18:26.397207       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [954447f0c968092e3dfb23f85b41f94836775c9f1bc23d1e84e6a62e0c4f6749] <==
	I1219 03:54:22.909150       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1219 03:54:23.010209       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1219 03:54:23.010322       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.83.54"]
	E1219 03:54:23.010457       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 03:54:23.198968       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1219 03:54:23.199030       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1219 03:54:23.199071       1 server_linux.go:132] "Using iptables Proxier"
	I1219 03:54:23.287511       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 03:54:23.287871       1 server.go:527] "Version info" version="v1.34.3"
	I1219 03:54:23.287891       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:54:23.319002       1 config.go:309] "Starting node config controller"
	I1219 03:54:23.319040       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 03:54:23.319052       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 03:54:23.328953       1 config.go:200] "Starting service config controller"
	I1219 03:54:23.328987       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 03:54:23.328996       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 03:54:23.329022       1 config.go:106] "Starting endpoint slice config controller"
	I1219 03:54:23.329027       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 03:54:23.329161       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 03:54:23.329185       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 03:54:23.429415       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1219 03:54:23.429426       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [2e68b6704fdf3c1afa48de9036c1b5beeb789d7c238d995602002285db55f9c7] <==
	I1219 03:54:18.845327       1 serving.go:386] Generated self-signed cert in-memory
	W1219 03:54:21.102885       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1219 03:54:21.102919       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1219 03:54:21.102930       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1219 03:54:21.102936       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1219 03:54:21.242281       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1219 03:54:21.242765       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:54:21.247638       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:54:21.247732       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:54:21.248480       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1219 03:54:21.248625       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1219 03:54:21.348549       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 19 04:18:05 embed-certs-244717 kubelet[1246]: E1219 04:18:05.936151    1246 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:0cdefa0492f2868b93f43880ffad4bd8ae8105c02dc5661352a0a40723b05dbc in docker.io/kubernetesui/dashboard-metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-gpdxz" podUID="aed4238c-131b-42c2-8c9f-f75f42efd32a"
	Dec 19 04:18:06 embed-certs-244717 kubelet[1246]: E1219 04:18:06.938100    1246 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-web\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": ErrImagePull: reading manifest 1.7.0 in docker.io/kubernetesui/dashboard-web: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-7jhl7" podUID="9e539d4c-644f-4905-a4e7-222f6b6aa324"
	Dec 19 04:18:06 embed-certs-244717 kubelet[1246]: E1219 04:18:06.944697    1246 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 19 04:18:06 embed-certs-244717 kubelet[1246]: E1219 04:18:06.944735    1246 kuberuntime_image.go:43] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 19 04:18:06 embed-certs-244717 kubelet[1246]: E1219 04:18:06.944799    1246 kuberuntime_manager.go:1449] "Unhandled Error" err="container metrics-server start failed in pod metrics-server-746fcd58dc-x74d4_kube-system(e4a33dcc-d3b6-45a0-92d7-6cfbc5df35b2): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Dec 19 04:18:06 embed-certs-244717 kubelet[1246]: E1219 04:18:06.944835    1246 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-x74d4" podUID="e4a33dcc-d3b6-45a0-92d7-6cfbc5df35b2"
	Dec 19 04:18:07 embed-certs-244717 kubelet[1246]: E1219 04:18:07.269152    1246 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766117887268194683  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:136931}  inodes_used:{value:62}}"
	Dec 19 04:18:07 embed-certs-244717 kubelet[1246]: E1219 04:18:07.269357    1246 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766117887268194683  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:136931}  inodes_used:{value:62}}"
	Dec 19 04:18:07 embed-certs-244717 kubelet[1246]: E1219 04:18:07.936951    1246 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-auth\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-auth:1.4.0\\\": ErrImagePull: reading manifest 1.4.0 in docker.io/kubernetesui/dashboard-auth: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-auth-6b55998857-99nts" podUID="be79c314-fcbc-410f-a245-ca04752aeb23"
	Dec 19 04:18:16 embed-certs-244717 kubelet[1246]: E1219 04:18:16.937048    1246 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:0cdefa0492f2868b93f43880ffad4bd8ae8105c02dc5661352a0a40723b05dbc in docker.io/kubernetesui/dashboard-metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-gpdxz" podUID="aed4238c-131b-42c2-8c9f-f75f42efd32a"
	Dec 19 04:18:17 embed-certs-244717 kubelet[1246]: E1219 04:18:17.271616    1246 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766117897271285409  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:136931}  inodes_used:{value:62}}"
	Dec 19 04:18:17 embed-certs-244717 kubelet[1246]: E1219 04:18:17.271655    1246 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766117897271285409  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:136931}  inodes_used:{value:62}}"
	Dec 19 04:18:17 embed-certs-244717 kubelet[1246]: E1219 04:18:17.935834    1246 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-api\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": ErrImagePull: reading manifest 1.14.0 in docker.io/kubernetesui/dashboard-api: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-api-677b969f5d-xr86s" podUID="2468eb14-0ebb-45fd-abf4-63a8e1309258"
	Dec 19 04:18:17 embed-certs-244717 kubelet[1246]: E1219 04:18:17.937409    1246 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-x74d4" podUID="e4a33dcc-d3b6-45a0-92d7-6cfbc5df35b2"
	Dec 19 04:18:18 embed-certs-244717 kubelet[1246]: E1219 04:18:18.940431    1246 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-auth\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-auth:1.4.0\\\": ErrImagePull: reading manifest 1.4.0 in docker.io/kubernetesui/dashboard-auth: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-auth-6b55998857-99nts" podUID="be79c314-fcbc-410f-a245-ca04752aeb23"
	Dec 19 04:18:19 embed-certs-244717 kubelet[1246]: E1219 04:18:19.936354    1246 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"clear-stale-pid\" with ImagePullBackOff: \"Back-off pulling image \\\"kong:3.9\\\": ErrImagePull: reading manifest 3.9 in docker.io/library/kong: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-ghhd7" podUID="051c3643-370d-478b-a0d6-5012d03a4d3e"
	Dec 19 04:18:21 embed-certs-244717 kubelet[1246]: E1219 04:18:21.936517    1246 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-web\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": ErrImagePull: reading manifest 1.7.0 in docker.io/kubernetesui/dashboard-web: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-7jhl7" podUID="9e539d4c-644f-4905-a4e7-222f6b6aa324"
	Dec 19 04:18:27 embed-certs-244717 kubelet[1246]: E1219 04:18:27.273028    1246 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766117907272742073  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:136931}  inodes_used:{value:62}}"
	Dec 19 04:18:27 embed-certs-244717 kubelet[1246]: E1219 04:18:27.273050    1246 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766117907272742073  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:136931}  inodes_used:{value:62}}"
	Dec 19 04:18:27 embed-certs-244717 kubelet[1246]: E1219 04:18:27.935859    1246 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:0cdefa0492f2868b93f43880ffad4bd8ae8105c02dc5661352a0a40723b05dbc in docker.io/kubernetesui/dashboard-metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-gpdxz" podUID="aed4238c-131b-42c2-8c9f-f75f42efd32a"
	Dec 19 04:18:30 embed-certs-244717 kubelet[1246]: E1219 04:18:30.935760    1246 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-auth\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-auth:1.4.0\\\": ErrImagePull: reading manifest 1.4.0 in docker.io/kubernetesui/dashboard-auth: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-auth-6b55998857-99nts" podUID="be79c314-fcbc-410f-a245-ca04752aeb23"
	Dec 19 04:18:30 embed-certs-244717 kubelet[1246]: E1219 04:18:30.936882    1246 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-api\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": ErrImagePull: reading manifest 1.14.0 in docker.io/kubernetesui/dashboard-api: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-api-677b969f5d-xr86s" podUID="2468eb14-0ebb-45fd-abf4-63a8e1309258"
	Dec 19 04:18:32 embed-certs-244717 kubelet[1246]: E1219 04:18:32.937508    1246 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-x74d4" podUID="e4a33dcc-d3b6-45a0-92d7-6cfbc5df35b2"
	Dec 19 04:18:33 embed-certs-244717 kubelet[1246]: E1219 04:18:33.936205    1246 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-web\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": ErrImagePull: reading manifest 1.7.0 in docker.io/kubernetesui/dashboard-web: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-7jhl7" podUID="9e539d4c-644f-4905-a4e7-222f6b6aa324"
	Dec 19 04:18:34 embed-certs-244717 kubelet[1246]: E1219 04:18:34.938018    1246 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"clear-stale-pid\" with ImagePullBackOff: \"Back-off pulling image \\\"kong:3.9\\\": ErrImagePull: reading manifest 3.9 in docker.io/library/kong: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-ghhd7" podUID="051c3643-370d-478b-a0d6-5012d03a4d3e"
	
	
	==> storage-provisioner [7f7f1d6992811a3f809cabb4d69d174028bcee28fdbabaface79c4622f2b40bd] <==
	W1219 04:18:10.039905       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:18:12.044611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:18:12.049969       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:18:14.055097       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:18:14.060560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:18:16.062975       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:18:16.067425       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:18:18.070467       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:18:18.080600       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:18:20.083589       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:18:20.090694       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:18:22.094763       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:18:22.099365       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:18:24.102287       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:18:24.107677       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:18:26.111017       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:18:26.120060       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:18:28.122875       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:18:28.127480       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:18:30.130870       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:18:30.139043       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:18:32.142800       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:18:32.148168       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:18:34.151500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:18:34.160198       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [ffc03b0b75719a3766a04010c15c50585e5b9d66412e39a40f987b194e653af8] <==
	I1219 03:54:22.640920       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1219 03:54:52.653878       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-244717 -n embed-certs-244717
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-244717 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: metrics-server-746fcd58dc-x74d4 kubernetes-dashboard-api-677b969f5d-xr86s kubernetes-dashboard-auth-6b55998857-99nts kubernetes-dashboard-kong-9849c64bd-ghhd7 kubernetes-dashboard-metrics-scraper-7685fd8b77-gpdxz kubernetes-dashboard-web-5c9f966b98-7jhl7
helpers_test.go:283: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context embed-certs-244717 describe pod metrics-server-746fcd58dc-x74d4 kubernetes-dashboard-api-677b969f5d-xr86s kubernetes-dashboard-auth-6b55998857-99nts kubernetes-dashboard-kong-9849c64bd-ghhd7 kubernetes-dashboard-metrics-scraper-7685fd8b77-gpdxz kubernetes-dashboard-web-5c9f966b98-7jhl7
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context embed-certs-244717 describe pod metrics-server-746fcd58dc-x74d4 kubernetes-dashboard-api-677b969f5d-xr86s kubernetes-dashboard-auth-6b55998857-99nts kubernetes-dashboard-kong-9849c64bd-ghhd7 kubernetes-dashboard-metrics-scraper-7685fd8b77-gpdxz kubernetes-dashboard-web-5c9f966b98-7jhl7: exit status 1 (67.088964ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-x74d4" not found
	Error from server (NotFound): pods "kubernetes-dashboard-api-677b969f5d-xr86s" not found
	Error from server (NotFound): pods "kubernetes-dashboard-auth-6b55998857-99nts" not found
	Error from server (NotFound): pods "kubernetes-dashboard-kong-9849c64bd-ghhd7" not found
	Error from server (NotFound): pods "kubernetes-dashboard-metrics-scraper-7685fd8b77-gpdxz" not found
	Error from server (NotFound): pods "kubernetes-dashboard-web-5c9f966b98-7jhl7" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context embed-certs-244717 describe pod metrics-server-746fcd58dc-x74d4 kubernetes-dashboard-api-677b969f5d-xr86s kubernetes-dashboard-auth-6b55998857-99nts kubernetes-dashboard-kong-9849c64bd-ghhd7 kubernetes-dashboard-metrics-scraper-7685fd8b77-gpdxz kubernetes-dashboard-web-5c9f966b98-7jhl7: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (541.99s)
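Note: the kubelet entries in the embed-certs-244717 log above show every kubernetes-dashboard image pull failing with Docker Hub's unauthenticated pull rate limit (toomanyrequests), so the dashboard pods never became ready inside the 9m0s window. One possible mitigation sketch for this environment, assuming the images can first be pulled locally with authenticated or mirrored access (not something this run did), is to pre-load them into the profile so the kubelet never has to pull from docker.io:

	minikube -p embed-certs-244717 image load docker.io/kubernetesui/dashboard-web:1.7.0
	minikube -p embed-certs-244717 image load docker.io/kubernetesui/dashboard-api:1.14.0
	minikube -p embed-certs-244717 image load docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2
	minikube -p embed-certs-244717 image load docker.io/library/kong:3.9

The image references are taken from the ErrImagePull messages above; whether pre-loading is acceptable for this CI job is left open here.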

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (541.93s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1219 04:10:24.934069    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/flannel-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 04:10:41.562536    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-936345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 04:10:45.208372    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-199791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 04:10:46.188830    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/enable-default-cni-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 04:10:47.649925    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/custom-flannel-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 04:10:51.406918    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/bridge-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 04:11:47.980487    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/flannel-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 04:11:53.473519    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/auto-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:285: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-168174 -n default-k8s-diff-port-168174
start_stop_delete_test.go:285: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-12-19 04:19:03.173627326 +0000 UTC m=+6846.211806641
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
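Note: the 9m0s wait above is the harness polling for dashboard pods by label. A roughly equivalent manual check against the same context, useful when reproducing this failure outside the harness (a sketch, not part of the recorded run), would be:

	kubectl --context default-k8s-diff-port-168174 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide
	kubectl --context default-k8s-diff-port-168174 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=540s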
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-168174 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-168174 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (61.095887ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "dashboard-metrics-scraper" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-168174 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
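Note: this assertion expects the dashboard-metrics-scraper deployment to reference the override image registry.k8s.io/echoserver:1.4, which was requested via the addon's --images flag (visible in the Audit table below). Had the deployment been created, one way to surface the image it actually runs would be (a sketch, assuming the deployment existed):

	kubectl --context default-k8s-diff-port-168174 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'

Since the describe call above returned NotFound, the deployment info is empty and the expectation fails.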
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-168174 -n default-k8s-diff-port-168174
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-168174 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-168174 logs -n 25: (1.010112677s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────────
──────────┐
	│ COMMAND │                                                                                                                       ARGS                                                                                                                       │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────────
──────────┤
	│ addons  │ enable dashboard -p embed-certs-244717 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ embed-certs-244717           │ jenkins │ v1.37.0 │ 19 Dec 25 03:53 UTC │ 19 Dec 25 03:53 UTC │
	│ start   │ -p embed-certs-244717 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-244717           │ jenkins │ v1.37.0 │ 19 Dec 25 03:53 UTC │ 19 Dec 25 04:00 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-168174 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                          │ default-k8s-diff-port-168174 │ jenkins │ v1.37.0 │ 19 Dec 25 03:54 UTC │ 19 Dec 25 03:54 UTC │
	│ start   │ -p default-k8s-diff-port-168174 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-168174 │ jenkins │ v1.37.0 │ 19 Dec 25 03:54 UTC │ 19 Dec 25 04:01 UTC │
	│ image   │ old-k8s-version-094166 image list --format=json                                                                                                                                                                                                  │ old-k8s-version-094166       │ jenkins │ v1.37.0 │ 19 Dec 25 04:12 UTC │ 19 Dec 25 04:12 UTC │
	│ pause   │ -p old-k8s-version-094166 --alsologtostderr -v=1                                                                                                                                                                                                 │ old-k8s-version-094166       │ jenkins │ v1.37.0 │ 19 Dec 25 04:12 UTC │ 19 Dec 25 04:12 UTC │
	│ unpause │ -p old-k8s-version-094166 --alsologtostderr -v=1                                                                                                                                                                                                 │ old-k8s-version-094166       │ jenkins │ v1.37.0 │ 19 Dec 25 04:12 UTC │ 19 Dec 25 04:12 UTC │
	│ delete  │ -p old-k8s-version-094166                                                                                                                                                                                                                        │ old-k8s-version-094166       │ jenkins │ v1.37.0 │ 19 Dec 25 04:12 UTC │ 19 Dec 25 04:12 UTC │
	│ delete  │ -p old-k8s-version-094166                                                                                                                                                                                                                        │ old-k8s-version-094166       │ jenkins │ v1.37.0 │ 19 Dec 25 04:12 UTC │ 19 Dec 25 04:12 UTC │
	│ start   │ -p newest-cni-509532 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-509532            │ jenkins │ v1.37.0 │ 19 Dec 25 04:12 UTC │ 19 Dec 25 04:12 UTC │
	│ addons  │ enable metrics-server -p newest-cni-509532 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                          │ newest-cni-509532            │ jenkins │ v1.37.0 │ 19 Dec 25 04:12 UTC │ 19 Dec 25 04:12 UTC │
	│ stop    │ -p newest-cni-509532 --alsologtostderr -v=3                                                                                                                                                                                                      │ newest-cni-509532            │ jenkins │ v1.37.0 │ 19 Dec 25 04:12 UTC │ 19 Dec 25 04:14 UTC │
	│ addons  │ enable dashboard -p newest-cni-509532 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                     │ newest-cni-509532            │ jenkins │ v1.37.0 │ 19 Dec 25 04:14 UTC │ 19 Dec 25 04:14 UTC │
	│ start   │ -p newest-cni-509532 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-509532            │ jenkins │ v1.37.0 │ 19 Dec 25 04:14 UTC │                     │
	│ image   │ no-preload-298059 image list --format=json                                                                                                                                                                                                       │ no-preload-298059            │ jenkins │ v1.37.0 │ 19 Dec 25 04:18 UTC │ 19 Dec 25 04:18 UTC │
	│ pause   │ -p no-preload-298059 --alsologtostderr -v=1                                                                                                                                                                                                      │ no-preload-298059            │ jenkins │ v1.37.0 │ 19 Dec 25 04:18 UTC │ 19 Dec 25 04:18 UTC │
	│ unpause │ -p no-preload-298059 --alsologtostderr -v=1                                                                                                                                                                                                      │ no-preload-298059            │ jenkins │ v1.37.0 │ 19 Dec 25 04:18 UTC │ 19 Dec 25 04:18 UTC │
	│ delete  │ -p no-preload-298059                                                                                                                                                                                                                             │ no-preload-298059            │ jenkins │ v1.37.0 │ 19 Dec 25 04:18 UTC │ 19 Dec 25 04:18 UTC │
	│ delete  │ -p no-preload-298059                                                                                                                                                                                                                             │ no-preload-298059            │ jenkins │ v1.37.0 │ 19 Dec 25 04:18 UTC │ 19 Dec 25 04:18 UTC │
	│ delete  │ -p guest-783207                                                                                                                                                                                                                                  │ guest-783207                 │ jenkins │ v1.37.0 │ 19 Dec 25 04:18 UTC │ 19 Dec 25 04:18 UTC │
	│ image   │ embed-certs-244717 image list --format=json                                                                                                                                                                                                      │ embed-certs-244717           │ jenkins │ v1.37.0 │ 19 Dec 25 04:18 UTC │ 19 Dec 25 04:18 UTC │
	│ pause   │ -p embed-certs-244717 --alsologtostderr -v=1                                                                                                                                                                                                     │ embed-certs-244717           │ jenkins │ v1.37.0 │ 19 Dec 25 04:18 UTC │ 19 Dec 25 04:18 UTC │
	│ unpause │ -p embed-certs-244717 --alsologtostderr -v=1                                                                                                                                                                                                     │ embed-certs-244717           │ jenkins │ v1.37.0 │ 19 Dec 25 04:18 UTC │ 19 Dec 25 04:18 UTC │
	│ delete  │ -p embed-certs-244717                                                                                                                                                                                                                            │ embed-certs-244717           │ jenkins │ v1.37.0 │ 19 Dec 25 04:18 UTC │ 19 Dec 25 04:18 UTC │
	│ delete  │ -p embed-certs-244717                                                                                                                                                                                                                            │ embed-certs-244717           │ jenkins │ v1.37.0 │ 19 Dec 25 04:18 UTC │ 19 Dec 25 04:18 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────────
──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 04:14:06
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 04:14:06.361038   61066 out.go:360] Setting OutFile to fd 1 ...
	I1219 04:14:06.361124   61066 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 04:14:06.361131   61066 out.go:374] Setting ErrFile to fd 2...
	I1219 04:14:06.361135   61066 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 04:14:06.361336   61066 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5010/.minikube/bin
	I1219 04:14:06.361764   61066 out.go:368] Setting JSON to false
	I1219 04:14:06.362626   61066 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6990,"bootTime":1766110656,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 04:14:06.362675   61066 start.go:143] virtualization: kvm guest
	I1219 04:14:06.364211   61066 out.go:179] * [newest-cni-509532] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 04:14:06.365123   61066 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 04:14:06.365107   61066 notify.go:221] Checking for updates...
	I1219 04:14:06.366901   61066 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 04:14:06.367890   61066 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-5010/kubeconfig
	I1219 04:14:06.368902   61066 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5010/.minikube
	I1219 04:14:06.369808   61066 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 04:14:06.370728   61066 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 04:14:06.372060   61066 config.go:182] Loaded profile config "newest-cni-509532": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1219 04:14:06.372737   61066 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 04:14:06.412127   61066 out.go:179] * Using the kvm2 driver based on existing profile
	I1219 04:14:06.413168   61066 start.go:309] selected driver: kvm2
	I1219 04:14:06.413184   61066 start.go:928] validating driver "kvm2" against &{Name:newest-cni-509532 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-rc.1 ClusterName:newest-cni-509532 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.70 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] L
istenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 04:14:06.413290   61066 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 04:14:06.414194   61066 start_flags.go:1012] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1219 04:14:06.414228   61066 cni.go:84] Creating CNI manager for ""
	I1219 04:14:06.414281   61066 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1219 04:14:06.414315   61066 start.go:353] cluster config:
	{Name:newest-cni-509532 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-509532 Namespace:default APIS
erverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.70 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpir
ation:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 04:14:06.414395   61066 iso.go:125] acquiring lock: {Name:mk42290af04a74a4dddf27fa33aac85c9bdccfe0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 04:14:06.415469   61066 out.go:179] * Starting "newest-cni-509532" primary control-plane node in "newest-cni-509532" cluster
	I1219 04:14:06.416441   61066 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1219 04:14:06.416466   61066 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-5010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1219 04:14:06.416473   61066 cache.go:65] Caching tarball of preloaded images
	I1219 04:14:06.416548   61066 preload.go:238] Found /home/jenkins/minikube-integration/22230-5010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1219 04:14:06.416559   61066 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1219 04:14:06.416671   61066 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/newest-cni-509532/config.json ...
	I1219 04:14:06.416906   61066 start.go:360] acquireMachinesLock for newest-cni-509532: {Name:mk229398d29442d4d52885bbac963e5004bfbfba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1219 04:14:06.416959   61066 start.go:364] duration metric: took 28.485µs to acquireMachinesLock for "newest-cni-509532"
	I1219 04:14:06.416976   61066 start.go:96] Skipping create...Using existing machine configuration
	I1219 04:14:06.416986   61066 fix.go:54] fixHost starting: 
	I1219 04:14:06.418488   61066 fix.go:112] recreateIfNeeded on newest-cni-509532: state=Stopped err=<nil>
	W1219 04:14:06.418507   61066 fix.go:138] unexpected machine state, will restart: <nil>
	I1219 04:14:06.419900   61066 out.go:252] * Restarting existing kvm2 VM for "newest-cni-509532" ...
	I1219 04:14:06.419939   61066 main.go:144] libmachine: starting domain...
	I1219 04:14:06.419951   61066 main.go:144] libmachine: ensuring networks are active...
	I1219 04:14:06.420639   61066 main.go:144] libmachine: Ensuring network default is active
	I1219 04:14:06.421075   61066 main.go:144] libmachine: Ensuring network mk-newest-cni-509532 is active
	I1219 04:14:06.421699   61066 main.go:144] libmachine: getting domain XML...
	I1219 04:14:06.423077   61066 main.go:144] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>newest-cni-509532</name>
	  <uuid>3bcc174c-f6d6-4825-be3a-2b994ab26c4e</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22230-5010/.minikube/machines/newest-cni-509532/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22230-5010/.minikube/machines/newest-cni-509532/newest-cni-509532.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:a0:99:c3'/>
	      <source network='mk-newest-cni-509532'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:78:17:8e'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1219 04:14:07.744358   61066 main.go:144] libmachine: waiting for domain to start...
	I1219 04:14:07.745789   61066 main.go:144] libmachine: domain is now running
	I1219 04:14:07.745805   61066 main.go:144] libmachine: waiting for IP...
	I1219 04:14:07.746502   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:07.747078   61066 main.go:144] libmachine: domain newest-cni-509532 has current primary IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:07.747096   61066 main.go:144] libmachine: found domain IP: 192.168.61.70
	I1219 04:14:07.747104   61066 main.go:144] libmachine: reserving static IP address...
	I1219 04:14:07.747490   61066 main.go:144] libmachine: found host DHCP lease matching {name: "newest-cni-509532", mac: "52:54:00:a0:99:c3", ip: "192.168.61.70"} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:12:20 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:07.747516   61066 main.go:144] libmachine: skip adding static IP to network mk-newest-cni-509532 - found existing host DHCP lease matching {name: "newest-cni-509532", mac: "52:54:00:a0:99:c3", ip: "192.168.61.70"}
	I1219 04:14:07.747523   61066 main.go:144] libmachine: reserved static IP address 192.168.61.70 for domain newest-cni-509532
	I1219 04:14:07.747527   61066 main.go:144] libmachine: waiting for SSH...
	I1219 04:14:07.747532   61066 main.go:144] libmachine: Getting to WaitForSSH function...
	I1219 04:14:07.749941   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:07.750247   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:12:20 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:07.750277   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:07.750441   61066 main.go:144] libmachine: Using SSH client type: native
	I1219 04:14:07.750665   61066 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.61.70 22 <nil> <nil>}
	I1219 04:14:07.750676   61066 main.go:144] libmachine: About to run SSH command:
	exit 0
	I1219 04:14:10.861828   61066 main.go:144] libmachine: Error dialing TCP: dial tcp 192.168.61.70:22: connect: no route to host
	I1219 04:14:16.942890   61066 main.go:144] libmachine: Error dialing TCP: dial tcp 192.168.61.70:22: connect: no route to host
	I1219 04:14:20.046083   61066 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 04:14:20.050093   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:20.050503   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:14:17 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:20.050526   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:20.050762   61066 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/newest-cni-509532/config.json ...
	I1219 04:14:20.050940   61066 machine.go:94] provisionDockerMachine start ...
	I1219 04:14:20.053514   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:20.054009   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:14:17 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:20.054062   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:20.054281   61066 main.go:144] libmachine: Using SSH client type: native
	I1219 04:14:20.054610   61066 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.61.70 22 <nil> <nil>}
	I1219 04:14:20.054627   61066 main.go:144] libmachine: About to run SSH command:
	hostname
	I1219 04:14:20.159350   61066 main.go:144] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1219 04:14:20.159379   61066 buildroot.go:166] provisioning hostname "newest-cni-509532"
	I1219 04:14:20.162396   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:20.162960   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:14:17 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:20.163001   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:20.163165   61066 main.go:144] libmachine: Using SSH client type: native
	I1219 04:14:20.163399   61066 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.61.70 22 <nil> <nil>}
	I1219 04:14:20.163419   61066 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-509532 && echo "newest-cni-509532" | sudo tee /etc/hostname
	I1219 04:14:20.284167   61066 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-509532
	
	I1219 04:14:20.287544   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:20.287971   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:14:17 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:20.287994   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:20.288136   61066 main.go:144] libmachine: Using SSH client type: native
	I1219 04:14:20.288322   61066 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.61.70 22 <nil> <nil>}
	I1219 04:14:20.288338   61066 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-509532' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-509532/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-509532' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1219 04:14:20.401704   61066 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 04:14:20.401728   61066 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22230-5010/.minikube CaCertPath:/home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22230-5010/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22230-5010/.minikube}
	I1219 04:14:20.401744   61066 buildroot.go:174] setting up certificates
	I1219 04:14:20.401752   61066 provision.go:84] configureAuth start
	I1219 04:14:20.404963   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:20.405393   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:14:17 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:20.405415   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:20.407804   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:20.408151   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:14:17 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:20.408185   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:20.408357   61066 provision.go:143] copyHostCerts
	I1219 04:14:20.408419   61066 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-5010/.minikube/ca.pem, removing ...
	I1219 04:14:20.408444   61066 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-5010/.minikube/ca.pem
	I1219 04:14:20.408538   61066 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22230-5010/.minikube/ca.pem (1082 bytes)
	I1219 04:14:20.408706   61066 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-5010/.minikube/cert.pem, removing ...
	I1219 04:14:20.408721   61066 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-5010/.minikube/cert.pem
	I1219 04:14:20.408775   61066 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22230-5010/.minikube/cert.pem (1123 bytes)
	I1219 04:14:20.408860   61066 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-5010/.minikube/key.pem, removing ...
	I1219 04:14:20.408877   61066 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-5010/.minikube/key.pem
	I1219 04:14:20.408925   61066 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22230-5010/.minikube/key.pem (1679 bytes)
	I1219 04:14:20.409014   61066 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22230-5010/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca-key.pem org=jenkins.newest-cni-509532 san=[127.0.0.1 192.168.61.70 localhost minikube newest-cni-509532]
	I1219 04:14:20.479369   61066 provision.go:177] copyRemoteCerts
	I1219 04:14:20.479428   61066 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1219 04:14:20.481882   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:20.482182   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:14:17 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:20.482203   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:20.482321   61066 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/newest-cni-509532/id_rsa Username:docker}
	I1219 04:14:20.566454   61066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1219 04:14:20.595753   61066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1219 04:14:20.622921   61066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1219 04:14:20.656657   61066 provision.go:87] duration metric: took 254.891587ms to configureAuth
	I1219 04:14:20.656688   61066 buildroot.go:189] setting minikube options for container-runtime
	I1219 04:14:20.656898   61066 config.go:182] Loaded profile config "newest-cni-509532": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1219 04:14:20.659654   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:20.660055   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:14:17 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:20.660074   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:20.660268   61066 main.go:144] libmachine: Using SSH client type: native
	I1219 04:14:20.660466   61066 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.61.70 22 <nil> <nil>}
	I1219 04:14:20.660480   61066 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1219 04:14:20.908219   61066 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1219 04:14:20.908261   61066 machine.go:97] duration metric: took 857.295481ms to provisionDockerMachine
	I1219 04:14:20.908277   61066 start.go:293] postStartSetup for "newest-cni-509532" (driver="kvm2")
	I1219 04:14:20.908289   61066 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1219 04:14:20.908347   61066 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1219 04:14:20.911558   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:20.912049   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:14:17 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:20.912081   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:20.912214   61066 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/newest-cni-509532/id_rsa Username:docker}
	I1219 04:14:20.995002   61066 ssh_runner.go:195] Run: cat /etc/os-release
	I1219 04:14:20.999993   61066 info.go:137] Remote host: Buildroot 2025.02
	I1219 04:14:21.000015   61066 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-5010/.minikube/addons for local assets ...
	I1219 04:14:21.000093   61066 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-5010/.minikube/files for local assets ...
	I1219 04:14:21.000225   61066 filesync.go:149] local asset: /home/jenkins/minikube-integration/22230-5010/.minikube/files/etc/ssl/certs/89372.pem -> 89372.pem in /etc/ssl/certs
	I1219 04:14:21.000345   61066 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1219 04:14:21.011859   61066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/files/etc/ssl/certs/89372.pem --> /etc/ssl/certs/89372.pem (1708 bytes)
	I1219 04:14:21.044486   61066 start.go:296] duration metric: took 136.195131ms for postStartSetup
	I1219 04:14:21.044529   61066 fix.go:56] duration metric: took 14.62754292s for fixHost
	I1219 04:14:21.047285   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:21.047669   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:14:17 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:21.047697   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:21.047883   61066 main.go:144] libmachine: Using SSH client type: native
	I1219 04:14:21.048095   61066 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.61.70 22 <nil> <nil>}
	I1219 04:14:21.048112   61066 main.go:144] libmachine: About to run SSH command:
	date +%s.%N
	I1219 04:14:21.154669   61066 main.go:144] libmachine: SSH cmd err, output: <nil>: 1766117661.109413285
	
	I1219 04:14:21.154689   61066 fix.go:216] guest clock: 1766117661.109413285
	I1219 04:14:21.154697   61066 fix.go:229] Guest: 2025-12-19 04:14:21.109413285 +0000 UTC Remote: 2025-12-19 04:14:21.04453285 +0000 UTC m=+14.732482606 (delta=64.880435ms)
	I1219 04:14:21.154716   61066 fix.go:200] guest clock delta is within tolerance: 64.880435ms
	I1219 04:14:21.154729   61066 start.go:83] releasing machines lock for "newest-cni-509532", held for 14.737760627s
	I1219 04:14:21.157999   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:21.158406   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:14:17 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:21.158446   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:21.159177   61066 ssh_runner.go:195] Run: cat /version.json
	I1219 04:14:21.159277   61066 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1219 04:14:21.162317   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:21.162712   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:21.162824   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:14:17 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:21.162859   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:21.163054   61066 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/newest-cni-509532/id_rsa Username:docker}
	I1219 04:14:21.163287   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:14:17 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:21.163320   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:21.163501   61066 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/newest-cni-509532/id_rsa Username:docker}
	I1219 04:14:21.240694   61066 ssh_runner.go:195] Run: systemctl --version
	I1219 04:14:21.273827   61066 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1219 04:14:21.422462   61066 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1219 04:14:21.428763   61066 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1219 04:14:21.428831   61066 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1219 04:14:21.447713   61066 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1219 04:14:21.447735   61066 start.go:496] detecting cgroup driver to use...
	I1219 04:14:21.447788   61066 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1219 04:14:21.467586   61066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1219 04:14:21.484309   61066 docker.go:218] disabling cri-docker service (if available) ...
	I1219 04:14:21.484377   61066 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1219 04:14:21.500884   61066 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1219 04:14:21.516934   61066 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1219 04:14:21.663592   61066 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1219 04:14:21.886439   61066 docker.go:234] disabling docker service ...
	I1219 04:14:21.886499   61066 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1219 04:14:21.902373   61066 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1219 04:14:21.916305   61066 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1219 04:14:22.098945   61066 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1219 04:14:22.243790   61066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1219 04:14:22.258649   61066 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 04:14:22.280345   61066 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1219 04:14:22.280436   61066 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 04:14:22.293096   61066 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1219 04:14:22.293154   61066 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 04:14:22.304967   61066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 04:14:22.317195   61066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 04:14:22.329451   61066 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1219 04:14:22.342541   61066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 04:14:22.354632   61066 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 04:14:22.376253   61066 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 04:14:22.388591   61066 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1219 04:14:22.399129   61066 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1219 04:14:22.399179   61066 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1219 04:14:22.418823   61066 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1219 04:14:22.431040   61066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 04:14:22.578462   61066 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1219 04:14:22.691413   61066 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1219 04:14:22.691504   61066 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1219 04:14:22.696926   61066 start.go:564] Will wait 60s for crictl version
	I1219 04:14:22.696992   61066 ssh_runner.go:195] Run: which crictl
	I1219 04:14:22.700936   61066 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1219 04:14:22.737311   61066 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1219 04:14:22.737400   61066 ssh_runner.go:195] Run: crio --version
	I1219 04:14:22.764722   61066 ssh_runner.go:195] Run: crio --version
	I1219 04:14:22.794640   61066 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.29.1 ...
	I1219 04:14:22.798427   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:22.798864   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:14:17 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:22.798888   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:22.799088   61066 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1219 04:14:22.803142   61066 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 04:14:22.819541   61066 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1219 04:14:22.820459   61066 kubeadm.go:884] updating cluster {Name:newest-cni-509532 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-509532 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.70 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1219 04:14:22.820600   61066 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1219 04:14:22.820648   61066 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 04:14:22.852144   61066 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-rc.1". assuming images are not preloaded.
	I1219 04:14:22.852235   61066 ssh_runner.go:195] Run: which lz4
	I1219 04:14:22.856631   61066 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1219 04:14:22.861114   61066 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1219 04:14:22.861147   61066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340598599 bytes)
	I1219 04:14:24.100811   61066 crio.go:462] duration metric: took 1.24424385s to copy over tarball
	I1219 04:14:24.100887   61066 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1219 04:14:25.642217   61066 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.541263209s)
	I1219 04:14:25.642254   61066 crio.go:469] duration metric: took 1.541416336s to extract the tarball
	I1219 04:14:25.642264   61066 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1219 04:14:25.680384   61066 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 04:14:25.722028   61066 crio.go:514] all images are preloaded for cri-o runtime.
	I1219 04:14:25.722055   61066 cache_images.go:86] Images are preloaded, skipping loading
	I1219 04:14:25.722063   61066 kubeadm.go:935] updating node { 192.168.61.70 8443 v1.35.0-rc.1 crio true true} ...
	I1219 04:14:25.722183   61066 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-509532 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.70
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-509532 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1219 04:14:25.722277   61066 ssh_runner.go:195] Run: crio config
	I1219 04:14:25.769708   61066 cni.go:84] Creating CNI manager for ""
	I1219 04:14:25.769737   61066 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1219 04:14:25.769764   61066 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1219 04:14:25.769793   61066 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.61.70 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-509532 NodeName:newest-cni-509532 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.70"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.70 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1219 04:14:25.769971   61066 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.70
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-509532"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.70"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.70"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1219 04:14:25.770093   61066 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1219 04:14:25.783203   61066 binaries.go:51] Found k8s binaries, skipping transfer
	I1219 04:14:25.783264   61066 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1219 04:14:25.794507   61066 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1219 04:14:25.813874   61066 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1219 04:14:25.832656   61066 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1219 04:14:25.851473   61066 ssh_runner.go:195] Run: grep 192.168.61.70	control-plane.minikube.internal$ /etc/hosts
	I1219 04:14:25.855283   61066 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.70	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 04:14:25.868794   61066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 04:14:26.012641   61066 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 04:14:26.033299   61066 certs.go:69] Setting up /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/newest-cni-509532 for IP: 192.168.61.70
	I1219 04:14:26.033319   61066 certs.go:195] generating shared ca certs ...
	I1219 04:14:26.033332   61066 certs.go:227] acquiring lock for ca certs: {Name:mk433b742ffd55233764aebe3c6f298e0d1f02ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 04:14:26.033472   61066 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22230-5010/.minikube/ca.key
	I1219 04:14:26.033510   61066 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22230-5010/.minikube/proxy-client-ca.key
	I1219 04:14:26.033526   61066 certs.go:257] generating profile certs ...
	I1219 04:14:26.033628   61066 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/newest-cni-509532/client.key
	I1219 04:14:26.033688   61066 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/newest-cni-509532/apiserver.key.91f2c6a6
	I1219 04:14:26.033722   61066 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/newest-cni-509532/proxy-client.key
	I1219 04:14:26.033831   61066 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/8937.pem (1338 bytes)
	W1219 04:14:26.033863   61066 certs.go:480] ignoring /home/jenkins/minikube-integration/22230-5010/.minikube/certs/8937_empty.pem, impossibly tiny 0 bytes
	I1219 04:14:26.033872   61066 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca-key.pem (1675 bytes)
	I1219 04:14:26.033902   61066 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/ca.pem (1082 bytes)
	I1219 04:14:26.033928   61066 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/cert.pem (1123 bytes)
	I1219 04:14:26.033950   61066 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/certs/key.pem (1679 bytes)
	I1219 04:14:26.033991   61066 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5010/.minikube/files/etc/ssl/certs/89372.pem (1708 bytes)
	I1219 04:14:26.034602   61066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1219 04:14:26.074451   61066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1219 04:14:26.106740   61066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1219 04:14:26.134229   61066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1219 04:14:26.161855   61066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/newest-cni-509532/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1219 04:14:26.191298   61066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/newest-cni-509532/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1219 04:14:26.220617   61066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/newest-cni-509532/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1219 04:14:26.248595   61066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/newest-cni-509532/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1219 04:14:26.277651   61066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/certs/8937.pem --> /usr/share/ca-certificates/8937.pem (1338 bytes)
	I1219 04:14:26.304192   61066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/files/etc/ssl/certs/89372.pem --> /usr/share/ca-certificates/89372.pem (1708 bytes)
	I1219 04:14:26.331489   61066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5010/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1219 04:14:26.359526   61066 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1219 04:14:26.381074   61066 ssh_runner.go:195] Run: openssl version
	I1219 04:14:26.387536   61066 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1219 04:14:26.398290   61066 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1219 04:14:26.409385   61066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1219 04:14:26.414244   61066 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 19 02:26 /usr/share/ca-certificates/minikubeCA.pem
	I1219 04:14:26.414281   61066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1219 04:14:26.421272   61066 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1219 04:14:26.431473   61066 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1219 04:14:26.441908   61066 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8937.pem
	I1219 04:14:26.453021   61066 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8937.pem /etc/ssl/certs/8937.pem
	I1219 04:14:26.464301   61066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8937.pem
	I1219 04:14:26.469137   61066 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 19 02:46 /usr/share/ca-certificates/8937.pem
	I1219 04:14:26.469186   61066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8937.pem
	I1219 04:14:26.475991   61066 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1219 04:14:26.486849   61066 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8937.pem /etc/ssl/certs/51391683.0
	I1219 04:14:26.497751   61066 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/89372.pem
	I1219 04:14:26.509027   61066 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/89372.pem /etc/ssl/certs/89372.pem
	I1219 04:14:26.520212   61066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/89372.pem
	I1219 04:14:26.525194   61066 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 19 02:46 /usr/share/ca-certificates/89372.pem
	I1219 04:14:26.525249   61066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/89372.pem
	I1219 04:14:26.532003   61066 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1219 04:14:26.542354   61066 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/89372.pem /etc/ssl/certs/3ec20f2e.0
	I1219 04:14:26.554029   61066 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1219 04:14:26.558993   61066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1219 04:14:26.566192   61066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1219 04:14:26.572977   61066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1219 04:14:26.580715   61066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1219 04:14:26.587688   61066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1219 04:14:26.594505   61066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1219 04:14:26.601393   61066 kubeadm.go:401] StartCluster: {Name:newest-cni-509532 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-509532 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.70 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 04:14:26.601491   61066 cri.go:57] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1219 04:14:26.601531   61066 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 04:14:26.634723   61066 cri.go:92] found id: ""
	I1219 04:14:26.634795   61066 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1219 04:14:26.646970   61066 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1219 04:14:26.646989   61066 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1219 04:14:26.647032   61066 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1219 04:14:26.657908   61066 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1219 04:14:26.659041   61066 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-509532" does not appear in /home/jenkins/minikube-integration/22230-5010/kubeconfig
	I1219 04:14:26.659677   61066 kubeconfig.go:62] /home/jenkins/minikube-integration/22230-5010/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-509532" cluster setting kubeconfig missing "newest-cni-509532" context setting]
	I1219 04:14:26.660520   61066 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5010/kubeconfig: {Name:mk464ac013e036429e360ed78fc04687ed75f83a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 04:14:26.662741   61066 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1219 04:14:26.673645   61066 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.61.70
	I1219 04:14:26.673667   61066 kubeadm.go:1161] stopping kube-system containers ...
	I1219 04:14:26.673679   61066 cri.go:57] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1219 04:14:26.673730   61066 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 04:14:26.708336   61066 cri.go:92] found id: ""
	I1219 04:14:26.708403   61066 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1219 04:14:26.735368   61066 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1219 04:14:26.746710   61066 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1219 04:14:26.746732   61066 kubeadm.go:158] found existing configuration files:
	
	I1219 04:14:26.746773   61066 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1219 04:14:26.756763   61066 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1219 04:14:26.756825   61066 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1219 04:14:26.767551   61066 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1219 04:14:26.777603   61066 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1219 04:14:26.777657   61066 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1219 04:14:26.789616   61066 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1219 04:14:26.799989   61066 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1219 04:14:26.800043   61066 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1219 04:14:26.811043   61066 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1219 04:14:26.821685   61066 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1219 04:14:26.821747   61066 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1219 04:14:26.832490   61066 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1219 04:14:26.842910   61066 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 04:14:26.899704   61066 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 04:14:27.435741   61066 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1219 04:14:27.687135   61066 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 04:14:27.761434   61066 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1219 04:14:27.848539   61066 api_server.go:52] waiting for apiserver process to appear ...
	I1219 04:14:27.848670   61066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 04:14:28.348883   61066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 04:14:28.848774   61066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 04:14:29.349337   61066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 04:14:29.848837   61066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 04:14:30.349757   61066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 04:14:30.384559   61066 api_server.go:72] duration metric: took 2.536027777s to wait for apiserver process to appear ...
	I1219 04:14:30.384596   61066 api_server.go:88] waiting for apiserver healthz status ...
	I1219 04:14:30.384624   61066 api_server.go:253] Checking apiserver healthz at https://192.168.61.70:8443/healthz ...
	I1219 04:14:32.236209   61066 api_server.go:279] https://192.168.61.70:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1219 04:14:32.236242   61066 api_server.go:103] status: https://192.168.61.70:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1219 04:14:32.236259   61066 api_server.go:253] Checking apiserver healthz at https://192.168.61.70:8443/healthz ...
	I1219 04:14:32.301458   61066 api_server.go:279] https://192.168.61.70:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1219 04:14:32.301485   61066 api_server.go:103] status: https://192.168.61.70:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1219 04:14:32.384678   61066 api_server.go:253] Checking apiserver healthz at https://192.168.61.70:8443/healthz ...
	I1219 04:14:32.390384   61066 api_server.go:279] https://192.168.61.70:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[-]poststarthook/start-kube-apiserver-identity-lease-controller failed: reason withheld
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 04:14:32.390423   61066 api_server.go:103] status: https://192.168.61.70:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[-]poststarthook/start-kube-apiserver-identity-lease-controller failed: reason withheld
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 04:14:32.884690   61066 api_server.go:253] Checking apiserver healthz at https://192.168.61.70:8443/healthz ...
	I1219 04:14:32.894173   61066 api_server.go:279] https://192.168.61.70:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 04:14:32.894197   61066 api_server.go:103] status: https://192.168.61.70:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 04:14:33.384837   61066 api_server.go:253] Checking apiserver healthz at https://192.168.61.70:8443/healthz ...
	I1219 04:14:33.395731   61066 api_server.go:279] https://192.168.61.70:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 04:14:33.395765   61066 api_server.go:103] status: https://192.168.61.70:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 04:14:33.885453   61066 api_server.go:253] Checking apiserver healthz at https://192.168.61.70:8443/healthz ...
	I1219 04:14:33.890388   61066 api_server.go:279] https://192.168.61.70:8443/healthz returned 200:
	ok
	I1219 04:14:33.898178   61066 api_server.go:141] control plane version: v1.35.0-rc.1
	I1219 04:14:33.898202   61066 api_server.go:131] duration metric: took 3.513597679s to wait for apiserver health ...
	I1219 04:14:33.898212   61066 cni.go:84] Creating CNI manager for ""
	I1219 04:14:33.898219   61066 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1219 04:14:33.899474   61066 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1219 04:14:33.900488   61066 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1219 04:14:33.923233   61066 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1219 04:14:33.972262   61066 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 04:14:33.984776   61066 system_pods.go:59] 8 kube-system pods found
	I1219 04:14:33.984823   61066 system_pods.go:61] "coredns-7d764666f9-wt5mn" [1e1844bc-e4c0-493b-bbdf-017660625fb4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 04:14:33.984834   61066 system_pods.go:61] "etcd-newest-cni-509532" [668ecd06-0928-483a-b393-bae23e1269b9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 04:14:33.984855   61066 system_pods.go:61] "kube-apiserver-newest-cni-509532" [3cc26981-eaaf-4a54-ac65-5e98371efb21] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 04:14:33.984865   61066 system_pods.go:61] "kube-controller-manager-newest-cni-509532" [38fb14a4-787e-490a-9049-21bf6733543b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 04:14:33.984879   61066 system_pods.go:61] "kube-proxy-k5ptq" [b2d52f71-bf33-4869-a7f5-d33183a19cce] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 04:14:33.984892   61066 system_pods.go:61] "kube-scheduler-newest-cni-509532" [53f913da-bb8f-4193-901b-272a4b77217c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 04:14:33.984904   61066 system_pods.go:61] "metrics-server-5d785b57d4-7sqzf" [0af927e7-5a60-42a7-adc5-638b0ac652c5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 04:14:33.984916   61066 system_pods.go:61] "storage-provisioner" [2154f643-f3b5-486f-bfc4-7355248590cd] Running
	I1219 04:14:33.984933   61066 system_pods.go:74] duration metric: took 12.647245ms to wait for pod list to return data ...
	I1219 04:14:33.984945   61066 node_conditions.go:102] verifying NodePressure condition ...
	I1219 04:14:33.993929   61066 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1219 04:14:33.993953   61066 node_conditions.go:123] node cpu capacity is 2
	I1219 04:14:33.993966   61066 node_conditions.go:105] duration metric: took 9.012349ms to run NodePressure ...
	I1219 04:14:33.994028   61066 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 04:14:34.291614   61066 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1219 04:14:34.313097   61066 ops.go:34] apiserver oom_adj: -16
	I1219 04:14:34.313126   61066 kubeadm.go:602] duration metric: took 7.666128862s to restartPrimaryControlPlane
	I1219 04:14:34.313139   61066 kubeadm.go:403] duration metric: took 7.711753039s to StartCluster
	I1219 04:14:34.313159   61066 settings.go:142] acquiring lock: {Name:mkb131693a40b9aa50c302192272518c3561c861 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 04:14:34.313257   61066 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-5010/kubeconfig
	I1219 04:14:34.315826   61066 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5010/kubeconfig: {Name:mk464ac013e036429e360ed78fc04687ed75f83a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 04:14:34.316151   61066 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.61.70 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1219 04:14:34.316217   61066 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1219 04:14:34.316324   61066 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-509532"
	I1219 04:14:34.316354   61066 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-509532"
	I1219 04:14:34.316351   61066 addons.go:70] Setting default-storageclass=true in profile "newest-cni-509532"
	W1219 04:14:34.316364   61066 addons.go:248] addon storage-provisioner should already be in state true
	I1219 04:14:34.316368   61066 addons.go:70] Setting metrics-server=true in profile "newest-cni-509532"
	I1219 04:14:34.316376   61066 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-509532"
	I1219 04:14:34.316382   61066 addons.go:239] Setting addon metrics-server=true in "newest-cni-509532"
	W1219 04:14:34.316391   61066 addons.go:248] addon metrics-server should already be in state true
	I1219 04:14:34.316412   61066 addons.go:70] Setting dashboard=true in profile "newest-cni-509532"
	I1219 04:14:34.316468   61066 addons.go:239] Setting addon dashboard=true in "newest-cni-509532"
	W1219 04:14:34.316477   61066 addons.go:248] addon dashboard should already be in state true
	I1219 04:14:34.316494   61066 host.go:66] Checking if "newest-cni-509532" exists ...
	I1219 04:14:34.316398   61066 host.go:66] Checking if "newest-cni-509532" exists ...
	I1219 04:14:34.316426   61066 host.go:66] Checking if "newest-cni-509532" exists ...
	I1219 04:14:34.316354   61066 config.go:182] Loaded profile config "newest-cni-509532": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1219 04:14:34.318473   61066 out.go:179] * Verifying Kubernetes components...
	I1219 04:14:34.319389   61066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 04:14:34.320092   61066 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 04:14:34.320109   61066 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
	I1219 04:14:34.320758   61066 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 04:14:34.321246   61066 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1219 04:14:34.321284   61066 addons.go:239] Setting addon default-storageclass=true in "newest-cni-509532"
	W1219 04:14:34.321510   61066 addons.go:248] addon default-storageclass should already be in state true
	I1219 04:14:34.321535   61066 host.go:66] Checking if "newest-cni-509532" exists ...
	I1219 04:14:34.321897   61066 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 04:14:34.321913   61066 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 04:14:34.322387   61066 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1219 04:14:34.322403   61066 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1219 04:14:34.323725   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:34.323987   61066 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 04:14:34.324003   61066 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 04:14:34.324827   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:14:17 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:34.324873   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:34.325140   61066 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/newest-cni-509532/id_rsa Username:docker}
	I1219 04:14:34.326238   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:34.326275   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:34.326828   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:14:17 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:34.326860   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:34.326860   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:14:17 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:34.326949   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:34.327058   61066 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/newest-cni-509532/id_rsa Username:docker}
	I1219 04:14:34.327242   61066 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/newest-cni-509532/id_rsa Username:docker}
	I1219 04:14:34.327975   61066 main.go:144] libmachine: domain newest-cni-509532 has defined MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:34.328324   61066 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:c3", ip: ""} in network mk-newest-cni-509532: {Iface:virbr3 ExpiryTime:2025-12-19 05:14:17 +0000 UTC Type:0 Mac:52:54:00:a0:99:c3 Iaid: IPaddr:192.168.61.70 Prefix:24 Hostname:newest-cni-509532 Clientid:01:52:54:00:a0:99:c3}
	I1219 04:14:34.328347   61066 main.go:144] libmachine: domain newest-cni-509532 has defined IP address 192.168.61.70 and MAC address 52:54:00:a0:99:c3 in network mk-newest-cni-509532
	I1219 04:14:34.328469   61066 sshutil.go:53] new ssh client: &{IP:192.168.61.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/newest-cni-509532/id_rsa Username:docker}
	I1219 04:14:34.587142   61066 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 04:14:34.611731   61066 api_server.go:52] waiting for apiserver process to appear ...
	I1219 04:14:34.611822   61066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 04:14:34.634330   61066 api_server.go:72] duration metric: took 318.137827ms to wait for apiserver process to appear ...
	I1219 04:14:34.634361   61066 api_server.go:88] waiting for apiserver healthz status ...
	I1219 04:14:34.634385   61066 api_server.go:253] Checking apiserver healthz at https://192.168.61.70:8443/healthz ...
	I1219 04:14:34.640210   61066 api_server.go:279] https://192.168.61.70:8443/healthz returned 200:
	ok
	I1219 04:14:34.641463   61066 api_server.go:141] control plane version: v1.35.0-rc.1
	I1219 04:14:34.641480   61066 api_server.go:131] duration metric: took 7.111019ms to wait for apiserver health ...
	I1219 04:14:34.641487   61066 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 04:14:34.644743   61066 system_pods.go:59] 8 kube-system pods found
	I1219 04:14:34.644776   61066 system_pods.go:61] "coredns-7d764666f9-wt5mn" [1e1844bc-e4c0-493b-bbdf-017660625fb4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 04:14:34.644789   61066 system_pods.go:61] "etcd-newest-cni-509532" [668ecd06-0928-483a-b393-bae23e1269b9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 04:14:34.644801   61066 system_pods.go:61] "kube-apiserver-newest-cni-509532" [3cc26981-eaaf-4a54-ac65-5e98371efb21] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 04:14:34.644812   61066 system_pods.go:61] "kube-controller-manager-newest-cni-509532" [38fb14a4-787e-490a-9049-21bf6733543b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 04:14:34.644821   61066 system_pods.go:61] "kube-proxy-k5ptq" [b2d52f71-bf33-4869-a7f5-d33183a19cce] Running
	I1219 04:14:34.644837   61066 system_pods.go:61] "kube-scheduler-newest-cni-509532" [53f913da-bb8f-4193-901b-272a4b77217c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 04:14:34.644847   61066 system_pods.go:61] "metrics-server-5d785b57d4-7sqzf" [0af927e7-5a60-42a7-adc5-638b0ac652c5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 04:14:34.644859   61066 system_pods.go:61] "storage-provisioner" [2154f643-f3b5-486f-bfc4-7355248590cd] Running
	I1219 04:14:34.644867   61066 system_pods.go:74] duration metric: took 3.373739ms to wait for pod list to return data ...
	I1219 04:14:34.644878   61066 default_sa.go:34] waiting for default service account to be created ...
	I1219 04:14:34.647226   61066 default_sa.go:45] found service account: "default"
	I1219 04:14:34.647247   61066 default_sa.go:55] duration metric: took 2.35291ms for default service account to be created ...
	I1219 04:14:34.647260   61066 kubeadm.go:587] duration metric: took 331.072692ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1219 04:14:34.647286   61066 node_conditions.go:102] verifying NodePressure condition ...
	I1219 04:14:34.649136   61066 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1219 04:14:34.649158   61066 node_conditions.go:123] node cpu capacity is 2
	I1219 04:14:34.649171   61066 node_conditions.go:105] duration metric: took 1.875766ms to run NodePressure ...
	I1219 04:14:34.649184   61066 start.go:242] waiting for startup goroutines ...
	I1219 04:14:34.684661   61066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 04:14:34.690440   61066 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1219 04:14:34.690464   61066 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1219 04:14:34.703173   61066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 04:14:34.737761   61066 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1219 04:14:34.737791   61066 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1219 04:14:34.790265   61066 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 04:14:34.790287   61066 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1219 04:14:34.852757   61066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 04:14:34.887897   61066 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 04:14:36.013051   61066 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.328350909s)
	I1219 04:14:36.013133   61066 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.309931889s)
	I1219 04:14:36.111178   61066 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.25838056s)
	I1219 04:14:36.111204   61066 ssh_runner.go:235] Completed: test -f /usr/bin/helm: (1.223278971s)
	I1219 04:14:36.111222   61066 addons.go:500] Verifying addon metrics-server=true in "newest-cni-509532"
	I1219 04:14:36.111276   61066 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 04:14:36.114770   61066 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	I1219 04:14:36.989143   61066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	I1219 04:14:40.311413   61066 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (3.322216791s)
	I1219 04:14:40.311501   61066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 04:14:40.700323   61066 addons.go:500] Verifying addon dashboard=true in "newest-cni-509532"
	I1219 04:14:40.703308   61066 out.go:179] * Verifying dashboard addon...
	I1219 04:14:40.705388   61066 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 04:14:40.714051   61066 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 04:14:40.714067   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:41.214289   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:41.709381   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:42.208940   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:42.711074   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:43.209100   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:43.709687   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:44.209381   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:44.709033   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:45.208335   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:45.708776   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:46.209886   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:46.708530   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:47.209371   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:47.708645   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:48.209250   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:48.708911   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:49.208441   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:49.709372   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:50.209545   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:50.708944   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:51.208438   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:51.709022   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:52.208662   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:52.709170   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:53.209170   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:53.709621   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:54.209235   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:54.708902   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:55.208961   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:55.708819   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:56.209635   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:56.709369   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:57.208990   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:57.709114   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:58.209155   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:58.708556   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:59.208920   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:14:59.709099   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:00.208668   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:00.709308   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:01.208791   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:01.709282   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:02.208969   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:02.709020   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:03.209562   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:03.709818   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:04.209394   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:04.710095   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:05.208341   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:05.708877   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:06.209468   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:06.709021   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:07.208884   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:07.710798   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:08.209944   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:08.709151   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:09.209372   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:09.709439   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:10.210196   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:10.709268   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:11.209953   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:11.708633   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:12.209488   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:12.709557   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:13.209528   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:13.710269   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:14.208719   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:14.709683   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:15.209748   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:15.710466   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:16.209094   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:16.708900   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:17.210178   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:17.709320   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:18.208709   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:18.711788   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:19.209147   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:19.709274   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:20.215927   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:20.709487   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:21.209636   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:21.709453   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:22.209104   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:22.709403   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:23.209951   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:23.709366   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:24.208821   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:24.709494   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:25.209361   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:25.709820   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:26.210263   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:26.708770   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:27.209796   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:27.710441   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:28.210538   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:28.709362   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:29.208745   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:29.713247   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:30.209128   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:30.709079   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:31.209001   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:31.709304   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:32.208985   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:32.708946   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:33.208932   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:33.709461   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:34.211211   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:34.710234   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:35.209227   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:35.709023   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:36.208843   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:36.708561   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:37.209466   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:37.710118   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:38.210715   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:38.709625   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:39.209486   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:39.709309   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:40.209102   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:40.708785   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:41.209503   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:41.709006   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:42.210327   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:42.709654   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:43.209327   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:43.709108   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:44.210491   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:44.709518   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:45.209472   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:45.709105   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:46.209051   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:46.709227   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:47.209758   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:47.709152   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:48.208757   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:48.709591   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:49.208784   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:49.709224   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:50.209656   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:50.709222   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:51.208915   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:51.709281   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:52.209437   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:52.709067   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:53.209388   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:53.709821   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:54.210256   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:54.709004   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:55.210468   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:55.708503   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:56.210298   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:56.708960   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:57.209547   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:57.709509   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:58.209519   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:58.709279   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:59.209362   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:15:59.708363   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:00.209110   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:00.708846   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:01.209401   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:01.709242   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:02.209610   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:02.708360   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:03.209720   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:03.708485   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:04.208731   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 04:16:04.709494   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	[... identical "waiting for pod" poll messages repeated roughly every 0.5s from 04:16:05 through 04:19:00; the kubernetes-dashboard-web pod remained Pending for the entire interval ...]
	I1219 04:19:01.208963   61066 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	
	
	==> CRI-O <==
	Dec 19 04:19:03 default-k8s-diff-port-168174 crio[893]: time="2025-12-19 04:19:03.825715665Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766117943825692991,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:136931,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f089e8e4-754c-4a02-aa33-9adc9d854d53 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 04:19:03 default-k8s-diff-port-168174 crio[893]: time="2025-12-19 04:19:03.826730627Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fad0bdc6-e519-419e-8662-ce54c287c34e name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:19:03 default-k8s-diff-port-168174 crio[893]: time="2025-12-19 04:19:03.826870651Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fad0bdc6-e519-419e-8662-ce54c287c34e name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:19:03 default-k8s-diff-port-168174 crio[893]: time="2025-12-19 04:19:03.827050520Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:26e6ee81f646c72c850b70ecca35f7adee442aecb0b43b306d3cdeee6026f584,PodSandboxId:8345a5965eace8c7c7cc6f9373bab32a2a8314386042f84afb02313b2048d9b9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766116522347035725,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec1d1888-a950-48d5-9b73-440e7556818b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2caca388dcae558e2cf45b546d63b95993ed6f874effdb03ad5e5911235916da,PodSandboxId:4f012f54328ca4f5304cbd68bdc8bbdfa10180a294a8b12faebe143bbe3f41ae,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1766116502259145777,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ca7cb580-0d7c-401c-8d64-c5bb86760477,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca1b125b6cafad9798dcaaa4e59d74adde5911caa574d34924fdd4022bbe679a,PodSandboxId:6925fe331fb33829ee17d854417078bf77251255616425c7cf53183387396a23,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1766116498385943301,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-dnfcc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8ee66a7-b129-4499-aad8-a988ecea241c,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"
name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a9a2aa1cbfadbd847de6cbbc990670ed05c34778fbe25ce4972a4762b0e5a14,PodSandboxId:5cce2c4d9c72d6c8364f4acb895c868c282b9883caf7d7d2964785eff615dc54,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1766116491940005380,Labels:map[string]string{io.kuber
netes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zs4wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2212782c-32ab-4355-8dda-9117953b0223,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e7628d157bc011d749b031c83d671751b03a44c981e27b8e1cea9a8de01cb72,PodSandboxId:8345a5965eace8c7c7cc6f9373bab32a2a8314386042f84afb02313b2048d9b9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766116491309986286,Labels:map[string]string{io.kubernetes.container.name: sto
rage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec1d1888-a950-48d5-9b73-440e7556818b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7b7fbe883018669fcd7b491b5ffd6260c3c94de5b5212ff255578f1329f11cc,PodSandboxId:4983aae39c7cb6080ad85fdd5a206a99db267bc8a565800f67829f7fd83d31d1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766116486801027255,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: e
tcd-default-k8s-diff-port-168174,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84e108fdf17cc8f56b9a0db5bf73d6cb,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a59d170b8ca5d86fcf9375544c790fe7c5d51034aff8c9daaf79524ddd0f122,PodSandboxId:1b2fac58f6e514929f7ca9aaa298e393cfd7bc1493f5586e7ad68076fe6a45b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:176611
6486784784284,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-168174,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b8cfd1777a85573c752d6ea56606951,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8444,\"containerPort\":8444,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c47bef8ab5b6819f147cc9370436632c70fad252c4b7755f0240da10cf676d8a,PodSandboxId:1cf243a559ad56b72baae71b02b1497fc03615938731b5ebfe2ac4fe8f830c3e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandle
r:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766116486737925173,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-168174,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55409b5058745059effbb614ceaaeabc,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1efb3e359c44d7879c1ca0d876346921845fcdd8a36efd9bb646db8301f8f5f,PodSandboxId:a1b44c869cec4ea46ff8e37c6fcca45502228a99c6b37d1f8a867f6427e6bb89,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:
5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766116486704823215,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-168174,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d023f6143e50cea3e3be2ba5b8c07a72,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fad0bdc6-e519-419e-8662-ce54c287c34e name=/runtime.v
1.RuntimeService/ListContainers
	Dec 19 04:19:03 default-k8s-diff-port-168174 crio[893]: time="2025-12-19 04:19:03.860158805Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f6062842-7430-44dc-a8b2-06fc1216f79f name=/runtime.v1.RuntimeService/Version
	Dec 19 04:19:03 default-k8s-diff-port-168174 crio[893]: time="2025-12-19 04:19:03.860268478Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f6062842-7430-44dc-a8b2-06fc1216f79f name=/runtime.v1.RuntimeService/Version
	Dec 19 04:19:03 default-k8s-diff-port-168174 crio[893]: time="2025-12-19 04:19:03.861500787Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3d6d5870-f81f-4cc4-b755-a859dee3df14 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 04:19:03 default-k8s-diff-port-168174 crio[893]: time="2025-12-19 04:19:03.861883134Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766117943861864741,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:136931,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3d6d5870-f81f-4cc4-b755-a859dee3df14 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 04:19:03 default-k8s-diff-port-168174 crio[893]: time="2025-12-19 04:19:03.862586206Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=55eb113e-f114-4aba-bf41-ecdb717fdeef name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:19:03 default-k8s-diff-port-168174 crio[893]: time="2025-12-19 04:19:03.862646993Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=55eb113e-f114-4aba-bf41-ecdb717fdeef name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:19:03 default-k8s-diff-port-168174 crio[893]: time="2025-12-19 04:19:03.862823003Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:26e6ee81f646c72c850b70ecca35f7adee442aecb0b43b306d3cdeee6026f584,PodSandboxId:8345a5965eace8c7c7cc6f9373bab32a2a8314386042f84afb02313b2048d9b9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766116522347035725,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec1d1888-a950-48d5-9b73-440e7556818b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2caca388dcae558e2cf45b546d63b95993ed6f874effdb03ad5e5911235916da,PodSandboxId:4f012f54328ca4f5304cbd68bdc8bbdfa10180a294a8b12faebe143bbe3f41ae,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1766116502259145777,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ca7cb580-0d7c-401c-8d64-c5bb86760477,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca1b125b6cafad9798dcaaa4e59d74adde5911caa574d34924fdd4022bbe679a,PodSandboxId:6925fe331fb33829ee17d854417078bf77251255616425c7cf53183387396a23,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1766116498385943301,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-dnfcc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8ee66a7-b129-4499-aad8-a988ecea241c,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"
name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a9a2aa1cbfadbd847de6cbbc990670ed05c34778fbe25ce4972a4762b0e5a14,PodSandboxId:5cce2c4d9c72d6c8364f4acb895c868c282b9883caf7d7d2964785eff615dc54,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1766116491940005380,Labels:map[string]string{io.kuber
netes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zs4wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2212782c-32ab-4355-8dda-9117953b0223,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e7628d157bc011d749b031c83d671751b03a44c981e27b8e1cea9a8de01cb72,PodSandboxId:8345a5965eace8c7c7cc6f9373bab32a2a8314386042f84afb02313b2048d9b9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766116491309986286,Labels:map[string]string{io.kubernetes.container.name: sto
rage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec1d1888-a950-48d5-9b73-440e7556818b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7b7fbe883018669fcd7b491b5ffd6260c3c94de5b5212ff255578f1329f11cc,PodSandboxId:4983aae39c7cb6080ad85fdd5a206a99db267bc8a565800f67829f7fd83d31d1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766116486801027255,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: e
tcd-default-k8s-diff-port-168174,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84e108fdf17cc8f56b9a0db5bf73d6cb,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a59d170b8ca5d86fcf9375544c790fe7c5d51034aff8c9daaf79524ddd0f122,PodSandboxId:1b2fac58f6e514929f7ca9aaa298e393cfd7bc1493f5586e7ad68076fe6a45b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:176611
6486784784284,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-168174,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b8cfd1777a85573c752d6ea56606951,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8444,\"containerPort\":8444,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c47bef8ab5b6819f147cc9370436632c70fad252c4b7755f0240da10cf676d8a,PodSandboxId:1cf243a559ad56b72baae71b02b1497fc03615938731b5ebfe2ac4fe8f830c3e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandle
r:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766116486737925173,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-168174,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55409b5058745059effbb614ceaaeabc,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1efb3e359c44d7879c1ca0d876346921845fcdd8a36efd9bb646db8301f8f5f,PodSandboxId:a1b44c869cec4ea46ff8e37c6fcca45502228a99c6b37d1f8a867f6427e6bb89,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:
5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766116486704823215,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-168174,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d023f6143e50cea3e3be2ba5b8c07a72,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=55eb113e-f114-4aba-bf41-ecdb717fdeef name=/runtime.v
1.RuntimeService/ListContainers
	Dec 19 04:19:03 default-k8s-diff-port-168174 crio[893]: time="2025-12-19 04:19:03.892383078Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d3328e9d-127d-4ba7-a2b9-ffb71bddf3e7 name=/runtime.v1.RuntimeService/Version
	Dec 19 04:19:03 default-k8s-diff-port-168174 crio[893]: time="2025-12-19 04:19:03.892533570Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d3328e9d-127d-4ba7-a2b9-ffb71bddf3e7 name=/runtime.v1.RuntimeService/Version
	Dec 19 04:19:03 default-k8s-diff-port-168174 crio[893]: time="2025-12-19 04:19:03.893364099Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3e6b9e8f-2ad0-4319-a553-ce7387992355 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 04:19:03 default-k8s-diff-port-168174 crio[893]: time="2025-12-19 04:19:03.893913948Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766117943893892198,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:136931,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3e6b9e8f-2ad0-4319-a553-ce7387992355 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 04:19:03 default-k8s-diff-port-168174 crio[893]: time="2025-12-19 04:19:03.894666182Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=efb2c75e-c2b5-448e-aba3-e1ae7fb7876b name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:19:03 default-k8s-diff-port-168174 crio[893]: time="2025-12-19 04:19:03.894729391Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=efb2c75e-c2b5-448e-aba3-e1ae7fb7876b name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:19:03 default-k8s-diff-port-168174 crio[893]: time="2025-12-19 04:19:03.894940952Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:26e6ee81f646c72c850b70ecca35f7adee442aecb0b43b306d3cdeee6026f584,PodSandboxId:8345a5965eace8c7c7cc6f9373bab32a2a8314386042f84afb02313b2048d9b9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766116522347035725,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec1d1888-a950-48d5-9b73-440e7556818b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2caca388dcae558e2cf45b546d63b95993ed6f874effdb03ad5e5911235916da,PodSandboxId:4f012f54328ca4f5304cbd68bdc8bbdfa10180a294a8b12faebe143bbe3f41ae,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1766116502259145777,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ca7cb580-0d7c-401c-8d64-c5bb86760477,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca1b125b6cafad9798dcaaa4e59d74adde5911caa574d34924fdd4022bbe679a,PodSandboxId:6925fe331fb33829ee17d854417078bf77251255616425c7cf53183387396a23,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1766116498385943301,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-dnfcc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8ee66a7-b129-4499-aad8-a988ecea241c,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"
name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a9a2aa1cbfadbd847de6cbbc990670ed05c34778fbe25ce4972a4762b0e5a14,PodSandboxId:5cce2c4d9c72d6c8364f4acb895c868c282b9883caf7d7d2964785eff615dc54,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1766116491940005380,Labels:map[string]string{io.kuber
netes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zs4wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2212782c-32ab-4355-8dda-9117953b0223,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e7628d157bc011d749b031c83d671751b03a44c981e27b8e1cea9a8de01cb72,PodSandboxId:8345a5965eace8c7c7cc6f9373bab32a2a8314386042f84afb02313b2048d9b9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766116491309986286,Labels:map[string]string{io.kubernetes.container.name: sto
rage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec1d1888-a950-48d5-9b73-440e7556818b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7b7fbe883018669fcd7b491b5ffd6260c3c94de5b5212ff255578f1329f11cc,PodSandboxId:4983aae39c7cb6080ad85fdd5a206a99db267bc8a565800f67829f7fd83d31d1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766116486801027255,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: e
tcd-default-k8s-diff-port-168174,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84e108fdf17cc8f56b9a0db5bf73d6cb,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a59d170b8ca5d86fcf9375544c790fe7c5d51034aff8c9daaf79524ddd0f122,PodSandboxId:1b2fac58f6e514929f7ca9aaa298e393cfd7bc1493f5586e7ad68076fe6a45b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:176611
6486784784284,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-168174,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b8cfd1777a85573c752d6ea56606951,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8444,\"containerPort\":8444,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c47bef8ab5b6819f147cc9370436632c70fad252c4b7755f0240da10cf676d8a,PodSandboxId:1cf243a559ad56b72baae71b02b1497fc03615938731b5ebfe2ac4fe8f830c3e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandle
r:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766116486737925173,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-168174,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55409b5058745059effbb614ceaaeabc,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1efb3e359c44d7879c1ca0d876346921845fcdd8a36efd9bb646db8301f8f5f,PodSandboxId:a1b44c869cec4ea46ff8e37c6fcca45502228a99c6b37d1f8a867f6427e6bb89,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:
5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766116486704823215,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-168174,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d023f6143e50cea3e3be2ba5b8c07a72,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=efb2c75e-c2b5-448e-aba3-e1ae7fb7876b name=/runtime.v
1.RuntimeService/ListContainers
	Dec 19 04:19:03 default-k8s-diff-port-168174 crio[893]: time="2025-12-19 04:19:03.931551394Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=28921245-2d40-4785-b668-9f6139e6e506 name=/runtime.v1.RuntimeService/Version
	Dec 19 04:19:03 default-k8s-diff-port-168174 crio[893]: time="2025-12-19 04:19:03.931618444Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=28921245-2d40-4785-b668-9f6139e6e506 name=/runtime.v1.RuntimeService/Version
	Dec 19 04:19:03 default-k8s-diff-port-168174 crio[893]: time="2025-12-19 04:19:03.932782203Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f30bf8a9-27e4-422f-88b8-b00cc8fd7036 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 04:19:03 default-k8s-diff-port-168174 crio[893]: time="2025-12-19 04:19:03.933383563Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766117943933361580,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:136931,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f30bf8a9-27e4-422f-88b8-b00cc8fd7036 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 19 04:19:03 default-k8s-diff-port-168174 crio[893]: time="2025-12-19 04:19:03.934323699Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b06f96d1-36e0-4075-844c-1afc084d48a6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:19:03 default-k8s-diff-port-168174 crio[893]: time="2025-12-19 04:19:03.934475977Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b06f96d1-36e0-4075-844c-1afc084d48a6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 19 04:19:03 default-k8s-diff-port-168174 crio[893]: time="2025-12-19 04:19:03.934863304Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:26e6ee81f646c72c850b70ecca35f7adee442aecb0b43b306d3cdeee6026f584,PodSandboxId:8345a5965eace8c7c7cc6f9373bab32a2a8314386042f84afb02313b2048d9b9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766116522347035725,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec1d1888-a950-48d5-9b73-440e7556818b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2caca388dcae558e2cf45b546d63b95993ed6f874effdb03ad5e5911235916da,PodSandboxId:4f012f54328ca4f5304cbd68bdc8bbdfa10180a294a8b12faebe143bbe3f41ae,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1766116502259145777,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ca7cb580-0d7c-401c-8d64-c5bb86760477,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca1b125b6cafad9798dcaaa4e59d74adde5911caa574d34924fdd4022bbe679a,PodSandboxId:6925fe331fb33829ee17d854417078bf77251255616425c7cf53183387396a23,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1766116498385943301,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-dnfcc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8ee66a7-b129-4499-aad8-a988ecea241c,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"
name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a9a2aa1cbfadbd847de6cbbc990670ed05c34778fbe25ce4972a4762b0e5a14,PodSandboxId:5cce2c4d9c72d6c8364f4acb895c868c282b9883caf7d7d2964785eff615dc54,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1766116491940005380,Labels:map[string]string{io.kuber
netes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zs4wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2212782c-32ab-4355-8dda-9117953b0223,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e7628d157bc011d749b031c83d671751b03a44c981e27b8e1cea9a8de01cb72,PodSandboxId:8345a5965eace8c7c7cc6f9373bab32a2a8314386042f84afb02313b2048d9b9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766116491309986286,Labels:map[string]string{io.kubernetes.container.name: sto
rage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec1d1888-a950-48d5-9b73-440e7556818b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7b7fbe883018669fcd7b491b5ffd6260c3c94de5b5212ff255578f1329f11cc,PodSandboxId:4983aae39c7cb6080ad85fdd5a206a99db267bc8a565800f67829f7fd83d31d1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766116486801027255,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: e
tcd-default-k8s-diff-port-168174,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84e108fdf17cc8f56b9a0db5bf73d6cb,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a59d170b8ca5d86fcf9375544c790fe7c5d51034aff8c9daaf79524ddd0f122,PodSandboxId:1b2fac58f6e514929f7ca9aaa298e393cfd7bc1493f5586e7ad68076fe6a45b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:176611
6486784784284,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-168174,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b8cfd1777a85573c752d6ea56606951,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8444,\"containerPort\":8444,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c47bef8ab5b6819f147cc9370436632c70fad252c4b7755f0240da10cf676d8a,PodSandboxId:1cf243a559ad56b72baae71b02b1497fc03615938731b5ebfe2ac4fe8f830c3e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandle
r:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766116486737925173,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-168174,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55409b5058745059effbb614ceaaeabc,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1efb3e359c44d7879c1ca0d876346921845fcdd8a36efd9bb646db8301f8f5f,PodSandboxId:a1b44c869cec4ea46ff8e37c6fcca45502228a99c6b37d1f8a867f6427e6bb89,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:
5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766116486704823215,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-168174,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d023f6143e50cea3e3be2ba5b8c07a72,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b06f96d1-36e0-4075-844c-1afc084d48a6 name=/runtime.v
1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	26e6ee81f646c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      23 minutes ago      Running             storage-provisioner       3                   8345a5965eace       storage-provisioner                                    kube-system
	2caca388dcae5       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   24 minutes ago      Running             busybox                   1                   4f012f54328ca       busybox                                                default
	ca1b125b6cafa       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      24 minutes ago      Running             coredns                   1                   6925fe331fb33       coredns-66bc5c9577-dnfcc                               kube-system
	1a9a2aa1cbfad       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                      24 minutes ago      Running             kube-proxy                1                   5cce2c4d9c72d       kube-proxy-zs4wg                                       kube-system
	5e7628d157bc0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      24 minutes ago      Exited              storage-provisioner       2                   8345a5965eace       storage-provisioner                                    kube-system
	a7b7fbe883018       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      24 minutes ago      Running             etcd                      1                   4983aae39c7cb       etcd-default-k8s-diff-port-168174                      kube-system
	5a59d170b8ca5       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                      24 minutes ago      Running             kube-apiserver            1                   1b2fac58f6e51       kube-apiserver-default-k8s-diff-port-168174            kube-system
	c47bef8ab5b68       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                      24 minutes ago      Running             kube-scheduler            1                   1cf243a559ad5       kube-scheduler-default-k8s-diff-port-168174            kube-system
	f1efb3e359c44       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                      24 minutes ago      Running             kube-controller-manager   1                   a1b44c869cec4       kube-controller-manager-default-k8s-diff-port-168174   kube-system
	
	
	==> coredns [ca1b125b6cafad9798dcaaa4e59d74adde5911caa574d34924fdd4022bbe679a] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57723 - 29016 "HINFO IN 3001237849225172108.7414532178602150098. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.02497374s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-168174
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-168174
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=default-k8s-diff-port-168174
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T03_51_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 03:51:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-168174
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 04:19:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 04:15:14 +0000   Fri, 19 Dec 2025 03:51:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 04:15:14 +0000   Fri, 19 Dec 2025 03:51:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 04:15:14 +0000   Fri, 19 Dec 2025 03:51:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 04:15:14 +0000   Fri, 19 Dec 2025 03:54:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.68
	  Hostname:    default-k8s-diff-port-168174
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 5503b0a81398475db625563c5bc2d168
	  System UUID:                5503b0a8-1398-475d-b625-563c5bc2d168
	  Boot ID:                    ec7dc5a0-c588-4c8b-b9bc-28aeb7330fb9
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 coredns-66bc5c9577-dnfcc                                 100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     27m
	  kube-system                 etcd-default-k8s-diff-port-168174                        100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         27m
	  kube-system                 kube-apiserver-default-k8s-diff-port-168174              250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-168174     200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-zs4wg                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-default-k8s-diff-port-168174              100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 metrics-server-746fcd58dc-xjkbx                          100m (5%)     0 (0%)      200Mi (6%)       0 (0%)         26m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kubernetes-dashboard        kubernetes-dashboard-api-7ddd685bb4-kxd2m                100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    24m
	  kubernetes-dashboard        kubernetes-dashboard-auth-548df69c79-p9fml               100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    24m
	  kubernetes-dashboard        kubernetes-dashboard-kong-9849c64bd-rjxnf                0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kubernetes-dashboard        kubernetes-dashboard-metrics-scraper-7685fd8b77-6s5wh    100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    24m
	  kubernetes-dashboard        kubernetes-dashboard-web-5c9f966b98-68g4g                100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests      Limits
	  --------           --------      ------
	  cpu                1250m (62%)   1 (50%)
	  memory             1170Mi (39%)  1770Mi (59%)
	  ephemeral-storage  0 (0%)        0 (0%)
	  hugepages-2Mi      0 (0%)        0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 26m                kube-proxy       
	  Normal   Starting                 24m                kube-proxy       
	  Normal   NodeHasSufficientMemory  27m                kubelet          Node default-k8s-diff-port-168174 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  27m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    27m                kubelet          Node default-k8s-diff-port-168174 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     27m                kubelet          Node default-k8s-diff-port-168174 status is now: NodeHasSufficientPID
	  Normal   Starting                 27m                kubelet          Starting kubelet.
	  Normal   NodeReady                27m                kubelet          Node default-k8s-diff-port-168174 status is now: NodeReady
	  Normal   RegisteredNode           27m                node-controller  Node default-k8s-diff-port-168174 event: Registered Node default-k8s-diff-port-168174 in Controller
	  Normal   Starting                 24m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  24m (x8 over 24m)  kubelet          Node default-k8s-diff-port-168174 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    24m (x8 over 24m)  kubelet          Node default-k8s-diff-port-168174 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     24m (x7 over 24m)  kubelet          Node default-k8s-diff-port-168174 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 24m                kubelet          Node default-k8s-diff-port-168174 has been rebooted, boot id: ec7dc5a0-c588-4c8b-b9bc-28aeb7330fb9
	  Normal   RegisteredNode           24m                node-controller  Node default-k8s-diff-port-168174 event: Registered Node default-k8s-diff-port-168174 in Controller
	
	
	==> dmesg <==
	[Dec19 03:54] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001295] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000203] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.775225] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000014] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.088691] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.100726] kauditd_printk_skb: 102 callbacks suppressed
	[  +6.380364] kauditd_printk_skb: 168 callbacks suppressed
	[  +0.000108] kauditd_printk_skb: 182 callbacks suppressed
	[Dec19 03:55] kauditd_printk_skb: 291 callbacks suppressed
	[ +12.029718] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [a7b7fbe883018669fcd7b491b5ffd6260c3c94de5b5212ff255578f1329f11cc] <==
	{"level":"warn","ts":"2025-12-19T03:55:23.513165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:55:23.534618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:55:23.546175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:55:23.563928Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:55:23.574673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:55:23.591079Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:55:23.604034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:55:23.617917Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:55:23.630551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:55:23.647724Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:55:23.659369Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35306","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-19T04:04:48.130463Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1035}
	{"level":"info","ts":"2025-12-19T04:04:48.155264Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1035,"took":"24.333192ms","hash":1277573185,"current-db-size-bytes":4239360,"current-db-size":"4.2 MB","current-db-size-in-use-bytes":1806336,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-12-19T04:04:48.155315Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1277573185,"revision":1035,"compact-revision":-1}
	{"level":"info","ts":"2025-12-19T04:09:48.136857Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1380}
	{"level":"info","ts":"2025-12-19T04:09:48.141957Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1380,"took":"4.271902ms","hash":14277058,"current-db-size-bytes":4239360,"current-db-size":"4.2 MB","current-db-size-in-use-bytes":2764800,"current-db-size-in-use":"2.8 MB"}
	{"level":"info","ts":"2025-12-19T04:09:48.141986Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":14277058,"revision":1380,"compact-revision":1035}
	{"level":"info","ts":"2025-12-19T04:09:54.218147Z","caller":"traceutil/trace.go:172","msg":"trace[1078571965] transaction","detail":"{read_only:false; response_revision:1801; number_of_response:1; }","duration":"129.558788ms","start":"2025-12-19T04:09:54.088540Z","end":"2025-12-19T04:09:54.218099Z","steps":["trace[1078571965] 'process raft request'  (duration: 129.368425ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T04:12:30.356537Z","caller":"traceutil/trace.go:172","msg":"trace[287367462] transaction","detail":"{read_only:false; response_revision:1978; number_of_response:1; }","duration":"276.715886ms","start":"2025-12-19T04:12:30.079778Z","end":"2025-12-19T04:12:30.356494Z","steps":["trace[287367462] 'process raft request'  (duration: 276.469533ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T04:12:30.361055Z","caller":"traceutil/trace.go:172","msg":"trace[1105878281] transaction","detail":"{read_only:false; response_revision:1979; number_of_response:1; }","duration":"279.373014ms","start":"2025-12-19T04:12:30.081668Z","end":"2025-12-19T04:12:30.361041Z","steps":["trace[1105878281] 'process raft request'  (duration: 279.293682ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T04:14:28.035684Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.656552ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T04:14:28.036220Z","caller":"traceutil/trace.go:172","msg":"trace[1498464842] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:2079; }","duration":"103.270358ms","start":"2025-12-19T04:14:27.932933Z","end":"2025-12-19T04:14:28.036204Z","steps":["trace[1498464842] 'range keys from in-memory index tree'  (duration: 101.592383ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T04:14:48.143062Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1794}
	{"level":"info","ts":"2025-12-19T04:14:48.147125Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1794,"took":"3.726957ms","hash":1510204538,"current-db-size-bytes":4239360,"current-db-size":"4.2 MB","current-db-size-in-use-bytes":2670592,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2025-12-19T04:14:48.147174Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1510204538,"revision":1794,"compact-revision":1380}
	
	
	==> kernel <==
	 04:19:04 up 24 min,  0 users,  load average: 0.23, 0.23, 0.20
	Linux default-k8s-diff-port-168174 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Dec 17 12:49:57 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [5a59d170b8ca5d86fcf9375544c790fe7c5d51034aff8c9daaf79524ddd0f122] <==
	E1219 04:14:51.080568       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1219 04:14:51.080575       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1219 04:14:51.080580       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1219 04:14:51.081707       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 04:15:51.081642       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 04:15:51.081738       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1219 04:15:51.081769       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 04:15:51.081869       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 04:15:51.081898       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1219 04:15:51.083767       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 04:17:51.082949       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 04:17:51.083007       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1219 04:17:51.083026       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 04:17:51.084194       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 04:17:51.084253       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1219 04:17:51.084262       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [f1efb3e359c44d7879c1ca0d876346921845fcdd8a36efd9bb646db8301f8f5f] <==
	I1219 04:12:55.053218       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:13:24.935920       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:13:25.061130       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:13:54.941868       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:13:55.071886       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:14:24.947390       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:14:25.085276       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:14:54.952508       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:14:55.092510       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:15:24.957073       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:15:25.102265       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:15:54.963115       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:15:55.111836       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:16:24.968815       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:16:25.121076       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:16:54.974068       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:16:55.131723       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:17:24.979454       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:17:25.139702       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:17:54.985008       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:17:55.147328       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:18:24.989969       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:18:25.156146       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 04:18:54.994579       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 04:18:55.164577       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [1a9a2aa1cbfadbd847de6cbbc990670ed05c34778fbe25ce4972a4762b0e5a14] <==
	I1219 03:54:52.411745       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1219 03:54:52.513125       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1219 03:54:52.513181       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.68"]
	E1219 03:54:52.513246       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 03:54:52.577645       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1219 03:54:52.577754       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1219 03:54:52.577785       1 server_linux.go:132] "Using iptables Proxier"
	I1219 03:54:52.614026       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 03:54:52.614219       1 server.go:527] "Version info" version="v1.34.3"
	I1219 03:54:52.614230       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:54:52.622559       1 config.go:200] "Starting service config controller"
	I1219 03:54:52.623023       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 03:54:52.623060       1 config.go:106] "Starting endpoint slice config controller"
	I1219 03:54:52.623068       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 03:54:52.623084       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 03:54:52.623089       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 03:54:52.629960       1 config.go:309] "Starting node config controller"
	I1219 03:54:52.629977       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 03:54:52.629985       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 03:54:52.724081       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1219 03:54:52.724118       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 03:54:52.724180       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [c47bef8ab5b6819f147cc9370436632c70fad252c4b7755f0240da10cf676d8a] <==
	I1219 03:54:50.124356       1 serving.go:386] Generated self-signed cert in-memory
	I1219 03:54:50.387620       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1219 03:54:50.387644       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:54:50.393918       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1219 03:54:50.394016       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1219 03:54:50.394029       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1219 03:54:50.394045       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1219 03:54:50.401093       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:54:50.401132       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:54:50.401238       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1219 03:54:50.401261       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1219 03:54:50.494477       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1219 03:54:50.501889       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1219 03:54:50.501986       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 19 04:18:34 default-k8s-diff-port-168174 kubelet[1242]: E1219 04:18:34.073657    1242 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\": ErrImagePull: reading manifest 1.2.2 in docker.io/kubernetesui/dashboard-metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-6s5wh" podUID="849bb739-c9a3-414f-8717-a34dddeafbbd"
	Dec 19 04:18:35 default-k8s-diff-port-168174 kubelet[1242]: E1219 04:18:35.075599    1242 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-xjkbx" podUID="c6e2f2b2-7b94-4ff2-85ba-e79d72b30655"
	Dec 19 04:18:35 default-k8s-diff-port-168174 kubelet[1242]: E1219 04:18:35.382200    1242 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest 3.9 in docker.io/library/kong: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kong:3.9"
	Dec 19 04:18:35 default-k8s-diff-port-168174 kubelet[1242]: E1219 04:18:35.382238    1242 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest 3.9 in docker.io/library/kong: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kong:3.9"
	Dec 19 04:18:35 default-k8s-diff-port-168174 kubelet[1242]: E1219 04:18:35.382319    1242 kuberuntime_manager.go:1449] "Unhandled Error" err="init container clear-stale-pid start failed in pod kubernetes-dashboard-kong-9849c64bd-rjxnf_kubernetes-dashboard(e2a6d304-c063-4022-9046-9ad88d13e776): ErrImagePull: reading manifest 3.9 in docker.io/library/kong: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 19 04:18:35 default-k8s-diff-port-168174 kubelet[1242]: E1219 04:18:35.382350    1242 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"clear-stale-pid\" with ErrImagePull: \"reading manifest 3.9 in docker.io/library/kong: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-rjxnf" podUID="e2a6d304-c063-4022-9046-9ad88d13e776"
	Dec 19 04:18:35 default-k8s-diff-port-168174 kubelet[1242]: E1219 04:18:35.407764    1242 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766117915407188113  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:136931}  inodes_used:{value:62}}"
	Dec 19 04:18:35 default-k8s-diff-port-168174 kubelet[1242]: E1219 04:18:35.407826    1242 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766117915407188113  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:136931}  inodes_used:{value:62}}"
	Dec 19 04:18:40 default-k8s-diff-port-168174 kubelet[1242]: E1219 04:18:40.074470    1242 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-auth\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-auth:1.4.0\\\": ErrImagePull: reading manifest 1.4.0 in docker.io/kubernetesui/dashboard-auth: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-auth-548df69c79-p9fml" podUID="c0e6acd2-48c2-4841-b6f3-227a34007c9a"
	Dec 19 04:18:43 default-k8s-diff-port-168174 kubelet[1242]: E1219 04:18:43.076278    1242 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-api\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": ErrImagePull: reading manifest 1.14.0 in docker.io/kubernetesui/dashboard-api: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-api-7ddd685bb4-kxd2m" podUID="6755fe02-aa19-47c7-84ac-fdbc589e9298"
	Dec 19 04:18:43 default-k8s-diff-port-168174 kubelet[1242]: E1219 04:18:43.076607    1242 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-web\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:5924e55a2eb65774b68e35fd0d0b41a3ef5dc6ef3e02dce6e340cf59b6d67d30 in docker.io/kubernetesui/dashboard-web: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-68g4g" podUID="7c826d2d-f354-48c5-b794-0bcd08b8d69d"
	Dec 19 04:18:45 default-k8s-diff-port-168174 kubelet[1242]: E1219 04:18:45.409648    1242 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766117925409290426  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:136931}  inodes_used:{value:62}}"
	Dec 19 04:18:45 default-k8s-diff-port-168174 kubelet[1242]: E1219 04:18:45.409669    1242 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766117925409290426  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:136931}  inodes_used:{value:62}}"
	Dec 19 04:18:48 default-k8s-diff-port-168174 kubelet[1242]: E1219 04:18:48.074996    1242 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\": ErrImagePull: reading manifest 1.2.2 in docker.io/kubernetesui/dashboard-metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-6s5wh" podUID="849bb739-c9a3-414f-8717-a34dddeafbbd"
	Dec 19 04:18:48 default-k8s-diff-port-168174 kubelet[1242]: E1219 04:18:48.076027    1242 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-xjkbx" podUID="c6e2f2b2-7b94-4ff2-85ba-e79d72b30655"
	Dec 19 04:18:49 default-k8s-diff-port-168174 kubelet[1242]: E1219 04:18:49.075598    1242 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"clear-stale-pid\" with ImagePullBackOff: \"Back-off pulling image \\\"kong:3.9\\\": ErrImagePull: reading manifest 3.9 in docker.io/library/kong: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-rjxnf" podUID="e2a6d304-c063-4022-9046-9ad88d13e776"
	Dec 19 04:18:52 default-k8s-diff-port-168174 kubelet[1242]: E1219 04:18:52.073924    1242 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-auth\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-auth:1.4.0\\\": ErrImagePull: reading manifest 1.4.0 in docker.io/kubernetesui/dashboard-auth: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-auth-548df69c79-p9fml" podUID="c0e6acd2-48c2-4841-b6f3-227a34007c9a"
	Dec 19 04:18:54 default-k8s-diff-port-168174 kubelet[1242]: E1219 04:18:54.073749    1242 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-api\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": ErrImagePull: reading manifest 1.14.0 in docker.io/kubernetesui/dashboard-api: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-api-7ddd685bb4-kxd2m" podUID="6755fe02-aa19-47c7-84ac-fdbc589e9298"
	Dec 19 04:18:55 default-k8s-diff-port-168174 kubelet[1242]: E1219 04:18:55.411015    1242 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766117935410502604  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:136931}  inodes_used:{value:62}}"
	Dec 19 04:18:55 default-k8s-diff-port-168174 kubelet[1242]: E1219 04:18:55.411036    1242 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766117935410502604  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:136931}  inodes_used:{value:62}}"
	Dec 19 04:18:58 default-k8s-diff-port-168174 kubelet[1242]: E1219 04:18:58.073126    1242 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-web\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:5924e55a2eb65774b68e35fd0d0b41a3ef5dc6ef3e02dce6e340cf59b6d67d30 in docker.io/kubernetesui/dashboard-web: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-68g4g" podUID="7c826d2d-f354-48c5-b794-0bcd08b8d69d"
	Dec 19 04:19:00 default-k8s-diff-port-168174 kubelet[1242]: E1219 04:19:00.073841    1242 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\": ErrImagePull: reading manifest 1.2.2 in docker.io/kubernetesui/dashboard-metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-6s5wh" podUID="849bb739-c9a3-414f-8717-a34dddeafbbd"
	Dec 19 04:19:00 default-k8s-diff-port-168174 kubelet[1242]: E1219 04:19:00.073857    1242 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-xjkbx" podUID="c6e2f2b2-7b94-4ff2-85ba-e79d72b30655"
	Dec 19 04:19:03 default-k8s-diff-port-168174 kubelet[1242]: E1219 04:19:03.077885    1242 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-auth\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-auth:1.4.0\\\": ErrImagePull: reading manifest 1.4.0 in docker.io/kubernetesui/dashboard-auth: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-auth-548df69c79-p9fml" podUID="c0e6acd2-48c2-4841-b6f3-227a34007c9a"
	Dec 19 04:19:04 default-k8s-diff-port-168174 kubelet[1242]: E1219 04:19:04.075125    1242 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"clear-stale-pid\" with ImagePullBackOff: \"Back-off pulling image \\\"kong:3.9\\\": ErrImagePull: reading manifest 3.9 in docker.io/library/kong: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-rjxnf" podUID="e2a6d304-c063-4022-9046-9ad88d13e776"
	
	
	==> storage-provisioner [26e6ee81f646c72c850b70ecca35f7adee442aecb0b43b306d3cdeee6026f584] <==
	W1219 04:18:38.904042       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:18:40.908212       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:18:40.913003       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:18:42.916569       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:18:42.923478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:18:44.926117       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:18:44.931321       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:18:46.934314       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:18:46.938470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:18:48.941219       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:18:48.947807       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:18:50.950453       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:18:50.954608       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:18:52.958325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:18:52.963368       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:18:54.966255       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:18:54.973221       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:18:56.976026       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:18:56.981155       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:18:58.985145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:18:58.991683       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:19:00.995026       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:19:00.999747       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:19:03.004267       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 04:19:03.013279       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [5e7628d157bc011d749b031c83d671751b03a44c981e27b8e1cea9a8de01cb72] <==
	I1219 03:54:51.390847       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1219 03:55:21.394375       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-168174 -n default-k8s-diff-port-168174
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-168174 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: metrics-server-746fcd58dc-xjkbx kubernetes-dashboard-api-7ddd685bb4-kxd2m kubernetes-dashboard-auth-548df69c79-p9fml kubernetes-dashboard-kong-9849c64bd-rjxnf kubernetes-dashboard-metrics-scraper-7685fd8b77-6s5wh kubernetes-dashboard-web-5c9f966b98-68g4g
helpers_test.go:283: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context default-k8s-diff-port-168174 describe pod metrics-server-746fcd58dc-xjkbx kubernetes-dashboard-api-7ddd685bb4-kxd2m kubernetes-dashboard-auth-548df69c79-p9fml kubernetes-dashboard-kong-9849c64bd-rjxnf kubernetes-dashboard-metrics-scraper-7685fd8b77-6s5wh kubernetes-dashboard-web-5c9f966b98-68g4g
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-168174 describe pod metrics-server-746fcd58dc-xjkbx kubernetes-dashboard-api-7ddd685bb4-kxd2m kubernetes-dashboard-auth-548df69c79-p9fml kubernetes-dashboard-kong-9849c64bd-rjxnf kubernetes-dashboard-metrics-scraper-7685fd8b77-6s5wh kubernetes-dashboard-web-5c9f966b98-68g4g: exit status 1 (62.651145ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-xjkbx" not found
	Error from server (NotFound): pods "kubernetes-dashboard-api-7ddd685bb4-kxd2m" not found
	Error from server (NotFound): pods "kubernetes-dashboard-auth-548df69c79-p9fml" not found
	Error from server (NotFound): pods "kubernetes-dashboard-kong-9849c64bd-rjxnf" not found
	Error from server (NotFound): pods "kubernetes-dashboard-metrics-scraper-7685fd8b77-6s5wh" not found
	Error from server (NotFound): pods "kubernetes-dashboard-web-5c9f966b98-68g4g" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context default-k8s-diff-port-168174 describe pod metrics-server-746fcd58dc-xjkbx kubernetes-dashboard-api-7ddd685bb4-kxd2m kubernetes-dashboard-auth-548df69c79-p9fml kubernetes-dashboard-kong-9849c64bd-rjxnf kubernetes-dashboard-metrics-scraper-7685fd8b77-6s5wh kubernetes-dashboard-web-5c9f966b98-68g4g: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (541.93s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//data (0s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-783207 ssh "df -t ext4 /data | grep /data"
iso_test.go:97: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-783207 ssh "df -t ext4 /data | grep /data": context deadline exceeded (2.367µs)
iso_test.go:99: failed to verify existence of "/data" mount. args "out/minikube-linux-amd64 -p guest-783207 ssh \"df -t ext4 /data | grep /data\"": context deadline exceeded
--- FAIL: TestISOImage/PersistentMounts//data (0.00s)
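Note: the durations reported for these non-zero exits (a few hundred nanoseconds to microseconds) suggest the subtests' shared context had already expired before the ssh command could run, so the df check itself was never executed. For reference, the verification these PersistentMounts subtests perform can be reproduced by hand; this is a sketch assuming the guest-783207 profile from this run is still up and that the persistent paths are backed by an ext4 partition:
	# the subtest passes when df reports an ext4 filesystem mounted at the path
	out/minikube-linux-amd64 -p guest-783207 ssh "df -t ext4 /data | grep /data"
	# the same check for several of the persistent paths at once
	out/minikube-linux-amd64 -p guest-783207 ssh "df -t ext4 /data /var/lib/docker /var/lib/minikube"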

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/docker (0s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-783207 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
iso_test.go:97: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-783207 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker": context deadline exceeded (378ns)
iso_test.go:99: failed to verify existence of "/var/lib/docker" mount. args "out/minikube-linux-amd64 -p guest-783207 ssh \"df -t ext4 /var/lib/docker | grep /var/lib/docker\"": context deadline exceeded
--- FAIL: TestISOImage/PersistentMounts//var/lib/docker (0.00s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/cni (0s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-783207 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
iso_test.go:97: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-783207 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni": context deadline exceeded (463ns)
iso_test.go:99: failed to verify existence of "/var/lib/cni" mount. args "out/minikube-linux-amd64 -p guest-783207 ssh \"df -t ext4 /var/lib/cni | grep /var/lib/cni\"": context deadline exceeded
--- FAIL: TestISOImage/PersistentMounts//var/lib/cni (0.00s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/kubelet (0s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-783207 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
iso_test.go:97: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-783207 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet": context deadline exceeded (480ns)
iso_test.go:99: failed to verify existence of "/var/lib/kubelet" mount. args "out/minikube-linux-amd64 -p guest-783207 ssh \"df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet\"": context deadline exceeded
--- FAIL: TestISOImage/PersistentMounts//var/lib/kubelet (0.00s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/minikube (0s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-783207 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
iso_test.go:97: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-783207 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube": context deadline exceeded (459ns)
iso_test.go:99: failed to verify existence of "/var/lib/minikube" mount. args "out/minikube-linux-amd64 -p guest-783207 ssh \"df -t ext4 /var/lib/minikube | grep /var/lib/minikube\"": context deadline exceeded
--- FAIL: TestISOImage/PersistentMounts//var/lib/minikube (0.00s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/toolbox (0s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-783207 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
iso_test.go:97: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-783207 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox": context deadline exceeded (430ns)
iso_test.go:99: failed to verify existence of "/var/lib/toolbox" mount. args "out/minikube-linux-amd64 -p guest-783207 ssh \"df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox\"": context deadline exceeded
--- FAIL: TestISOImage/PersistentMounts//var/lib/toolbox (0.00s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/boot2docker (0s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-783207 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
iso_test.go:97: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-783207 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker": context deadline exceeded (441ns)
iso_test.go:99: failed to verify existence of "/var/lib/boot2docker" mount. args "out/minikube-linux-amd64 -p guest-783207 ssh \"df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker\"": context deadline exceeded
--- FAIL: TestISOImage/PersistentMounts//var/lib/boot2docker (0.00s)

                                                
                                    
x
+
TestISOImage/VersionJSON (0s)

                                                
                                                
=== RUN   TestISOImage/VersionJSON
iso_test.go:106: (dbg) Run:  out/minikube-linux-amd64 -p guest-783207 ssh "cat /version.json"
iso_test.go:106: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-783207 ssh "cat /version.json": context deadline exceeded (474ns)
iso_test.go:108: failed to read /version.json. args "out/minikube-linux-amd64 -p guest-783207 ssh \"cat /version.json\"": context deadline exceeded
--- FAIL: TestISOImage/VersionJSON (0.00s)

                                                
                                    
x
+
TestISOImage/eBPFSupport (0s)

                                                
                                                
=== RUN   TestISOImage/eBPFSupport
iso_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p guest-783207 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
iso_test.go:125: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-783207 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'": context deadline exceeded (388ns)
iso_test.go:127: failed to verify existence of "/sys/kernel/btf/vmlinux" file: args "out/minikube-linux-amd64 -p guest-783207 ssh \"test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'\"": context deadline exceeded
iso_test.go:131: expected file "/sys/kernel/btf/vmlinux" to exist, but it does not. BTF types are required for CO-RE eBPF programs; set CONFIG_DEBUG_INFO_BTF in kernel configuration.
--- FAIL: TestISOImage/eBPFSupport (0.00s)
E1219 04:18:33.872490    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/calico-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
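Note: the eBPF check only looks for the kernel's exported BTF blob. A minimal manual check against the same guest, assuming the guest-783207 profile is still running (the second command additionally assumes the ISO kernel enables CONFIG_IKCONFIG_PROC so that /proc/config.gz exists), would be:
	# CO-RE eBPF programs need the kernel's BTF type information exported here
	out/minikube-linux-amd64 -p guest-783207 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
	# confirm the kernel config was built with BTF generation enabled
	out/minikube-linux-amd64 -p guest-783207 ssh "zcat /proc/config.gz | grep CONFIG_DEBUG_INFO_BTF"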

                                                
                                    

Test pass (336/431)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 22.06
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.15
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.3/json-events 9.56
13 TestDownloadOnly/v1.34.3/preload-exists 0
17 TestDownloadOnly/v1.34.3/LogsDuration 0.07
18 TestDownloadOnly/v1.34.3/DeleteAll 0.15
19 TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.35.0-rc.1/json-events 9.15
22 TestDownloadOnly/v1.35.0-rc.1/preload-exists 0
26 TestDownloadOnly/v1.35.0-rc.1/LogsDuration 0.07
27 TestDownloadOnly/v1.35.0-rc.1/DeleteAll 0.21
28 TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds 0.33
30 TestBinaryMirror 0.63
31 TestOffline 101.4
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 127.94
40 TestAddons/serial/GCPAuth/Namespaces 0.15
41 TestAddons/serial/GCPAuth/FakeCredentials 11.49
44 TestAddons/parallel/Registry 22.8
45 TestAddons/parallel/RegistryCreds 0.62
47 TestAddons/parallel/InspektorGadget 10.92
48 TestAddons/parallel/MetricsServer 6.15
50 TestAddons/parallel/CSI 54.14
51 TestAddons/parallel/Headlamp 20.66
52 TestAddons/parallel/CloudSpanner 6.58
53 TestAddons/parallel/LocalPath 55.77
54 TestAddons/parallel/NvidiaDevicePlugin 5.7
55 TestAddons/parallel/Yakd 11.01
57 TestAddons/StoppedEnableDisable 87.52
58 TestCertOptions 56.53
59 TestCertExpiration 257.4
61 TestForceSystemdFlag 38.82
62 TestForceSystemdEnv 55.97
67 TestErrorSpam/setup 34.14
68 TestErrorSpam/start 0.3
69 TestErrorSpam/status 0.62
70 TestErrorSpam/pause 1.43
71 TestErrorSpam/unpause 1.63
72 TestErrorSpam/stop 5.32
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 47.98
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 59.56
79 TestFunctional/serial/KubeContext 0.04
80 TestFunctional/serial/KubectlGetPods 0.07
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.1
84 TestFunctional/serial/CacheCmd/cache/add_local 2.05
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.17
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.48
89 TestFunctional/serial/CacheCmd/cache/delete 0.11
90 TestFunctional/serial/MinikubeKubectlCmd 0.11
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
92 TestFunctional/serial/ExtraConfig 35.44
93 TestFunctional/serial/ComponentHealth 0.06
94 TestFunctional/serial/LogsCmd 1.18
95 TestFunctional/serial/LogsFileCmd 1.19
96 TestFunctional/serial/InvalidService 3.82
98 TestFunctional/parallel/ConfigCmd 0.37
100 TestFunctional/parallel/DryRun 0.22
101 TestFunctional/parallel/InternationalLanguage 0.12
102 TestFunctional/parallel/StatusCmd 0.62
107 TestFunctional/parallel/AddonsCmd 0.14
108 TestFunctional/parallel/PersistentVolumeClaim 246.41
110 TestFunctional/parallel/SSHCmd 0.29
111 TestFunctional/parallel/CpCmd 1.08
112 TestFunctional/parallel/MySQL 264.43
113 TestFunctional/parallel/FileSync 0.16
114 TestFunctional/parallel/CertSync 0.93
118 TestFunctional/parallel/NodeLabels 0.06
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.31
122 TestFunctional/parallel/License 0.3
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.34
125 TestFunctional/parallel/ProfileCmd/profile_list 0.3
126 TestFunctional/parallel/ProfileCmd/profile_json_output 0.31
127 TestFunctional/parallel/MountCmd/any-port 38.03
128 TestFunctional/parallel/MountCmd/specific-port 1.41
129 TestFunctional/parallel/MountCmd/VerifyCleanup 1.23
139 TestFunctional/parallel/Version/short 0.06
140 TestFunctional/parallel/Version/components 0.4
141 TestFunctional/parallel/ImageCommands/ImageListShort 0.19
142 TestFunctional/parallel/ImageCommands/ImageListTable 0.17
143 TestFunctional/parallel/ImageCommands/ImageListJson 0.18
144 TestFunctional/parallel/ImageCommands/ImageListYaml 0.17
145 TestFunctional/parallel/ImageCommands/ImageBuild 3.42
146 TestFunctional/parallel/ImageCommands/Setup 1.77
147 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.23
148 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.8
149 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.64
150 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.48
151 TestFunctional/parallel/ImageCommands/ImageRemove 0.45
152 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.73
153 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.52
154 TestFunctional/parallel/UpdateContextCmd/no_changes 0.06
155 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.06
156 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.07
157 TestFunctional/parallel/ServiceCmd/List 2.39
158 TestFunctional/parallel/ServiceCmd/JSONOutput 2.38
162 TestFunctional/delete_echo-server_images 0.04
163 TestFunctional/delete_my-image_image 0.01
164 TestFunctional/delete_minikube_cached_images 0.02
168 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile 0
169 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy 80.59
170 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart 31.17
172 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext 0.04
173 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods 0.07
176 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote 3.26
177 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local 2.01
178 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete 0.06
179 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list 0.06
180 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node 0.17
181 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload 1.39
182 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete 0.12
183 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd 0.11
184 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly 0.11
185 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig 32.07
186 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth 0.06
187 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd 1.22
188 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd 1.2
189 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService 4.38
191 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd 0.41
193 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun 0.22
194 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage 0.11
195 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd 0.64
200 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd 0.14
201 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim 239.84
203 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd 0.29
204 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd 0.98
205 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL 287.81
206 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync 0.16
207 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync 1.2
211 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels 0.06
213 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled 0.32
215 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License 0.3
217 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create 0.4
218 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list 0.31
219 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output 0.3
220 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port 69.86
221 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port 1.16
222 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup 1.08
223 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort 0.18
224 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable 0.18
225 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson 0.18
226 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml 0.19
227 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild 3.55
228 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup 0.83
229 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon 1.26
230 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon 0.81
231 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon 1.62
232 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile 0.48
233 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove 0.45
234 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile 0.75
235 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon 0.57
236 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short 0.06
237 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components 0.4
238 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes 0.07
239 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster 0.07
240 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters 0.07
250 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List 2.41
251 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput 2.4
255 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images 0.04
256 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image 0.01
257 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images 0.01
261 TestMultiControlPlane/serial/StartCluster 205.54
262 TestMultiControlPlane/serial/DeployApp 6.71
263 TestMultiControlPlane/serial/PingHostFromPods 1.25
264 TestMultiControlPlane/serial/AddWorkerNode 43.24
265 TestMultiControlPlane/serial/NodeLabels 0.06
266 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.62
267 TestMultiControlPlane/serial/CopyFile 10.28
268 TestMultiControlPlane/serial/StopSecondaryNode 89.54
269 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.47
270 TestMultiControlPlane/serial/RestartSecondaryNode 36.49
271 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.79
272 TestMultiControlPlane/serial/RestartClusterKeepsNodes 374.8
273 TestMultiControlPlane/serial/DeleteSecondaryNode 17.8
274 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.48
275 TestMultiControlPlane/serial/StopCluster 252.4
276 TestMultiControlPlane/serial/RestartCluster 93.87
277 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.48
278 TestMultiControlPlane/serial/AddSecondaryNode 100.6
279 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.63
284 TestJSONOutput/start/Command 78.79
285 TestJSONOutput/start/Audit 0
287 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
288 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
290 TestJSONOutput/pause/Command 0.69
291 TestJSONOutput/pause/Audit 0
293 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
294 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
296 TestJSONOutput/unpause/Command 0.62
297 TestJSONOutput/unpause/Audit 0
299 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
300 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
302 TestJSONOutput/stop/Command 6.75
303 TestJSONOutput/stop/Audit 0
305 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
306 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
307 TestErrorJSONOutput 0.22
312 TestMainNoArgs 0.06
313 TestMinikubeProfile 74
316 TestMountStart/serial/StartWithMountFirst 19.74
317 TestMountStart/serial/VerifyMountFirst 0.3
318 TestMountStart/serial/StartWithMountSecond 20.33
319 TestMountStart/serial/VerifyMountSecond 0.3
320 TestMountStart/serial/DeleteFirst 0.67
321 TestMountStart/serial/VerifyMountPostDelete 0.31
322 TestMountStart/serial/Stop 1.18
323 TestMountStart/serial/RestartStopped 18.58
324 TestMountStart/serial/VerifyMountPostStop 0.3
327 TestMultiNode/serial/FreshStart2Nodes 99.63
328 TestMultiNode/serial/DeployApp2Nodes 6.27
329 TestMultiNode/serial/PingHostFrom2Pods 0.83
330 TestMultiNode/serial/AddNode 41.5
331 TestMultiNode/serial/MultiNodeLabels 0.06
332 TestMultiNode/serial/ProfileList 0.45
333 TestMultiNode/serial/CopyFile 5.77
334 TestMultiNode/serial/StopNode 2.05
335 TestMultiNode/serial/StartAfterStop 37.95
336 TestMultiNode/serial/RestartKeepsNodes 287.45
337 TestMultiNode/serial/DeleteNode 2.45
338 TestMultiNode/serial/StopMultiNode 162.28
339 TestMultiNode/serial/RestartMultiNode 86.73
340 TestMultiNode/serial/ValidateNameConflict 40.42
347 TestScheduledStopUnix 107.32
351 TestRunningBinaryUpgrade 380.63
353 TestKubernetesUpgrade 158.67
358 TestStoppedBinaryUpgrade/Setup 3.25
359 TestStoppedBinaryUpgrade/Upgrade 114.13
364 TestNetworkPlugins/group/false 3.51
376 TestPause/serial/Start 90.64
377 TestStoppedBinaryUpgrade/MinikubeLogs 1.1
379 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
380 TestNoKubernetes/serial/StartWithK8s 55.56
381 TestNoKubernetes/serial/StartWithStopK8s 5.78
382 TestNoKubernetes/serial/Start 18.72
384 TestISOImage/Setup 29.39
385 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
386 TestNoKubernetes/serial/VerifyK8sNotRunning 0.15
387 TestNoKubernetes/serial/ProfileList 0.88
388 TestNoKubernetes/serial/Stop 1.22
389 TestNoKubernetes/serial/StartNoArgs 40.9
402 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.17
403 TestNetworkPlugins/group/auto/Start 70.11
404 TestNetworkPlugins/group/kindnet/Start 61.7
405 TestNetworkPlugins/group/auto/KubeletFlags 0.18
406 TestNetworkPlugins/group/auto/NetCatPod 11.25
407 TestNetworkPlugins/group/auto/DNS 0.15
408 TestNetworkPlugins/group/auto/Localhost 0.13
409 TestNetworkPlugins/group/auto/HairPin 0.12
410 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
411 TestNetworkPlugins/group/calico/Start 75.42
412 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
413 TestNetworkPlugins/group/kindnet/NetCatPod 11.24
414 TestNetworkPlugins/group/custom-flannel/Start 112.71
415 TestNetworkPlugins/group/kindnet/DNS 0.14
416 TestNetworkPlugins/group/kindnet/Localhost 0.12
417 TestNetworkPlugins/group/kindnet/HairPin 0.11
418 TestNetworkPlugins/group/enable-default-cni/Start 90.53
419 TestNetworkPlugins/group/calico/ControllerPod 6.01
420 TestNetworkPlugins/group/calico/KubeletFlags 0.18
421 TestNetworkPlugins/group/calico/NetCatPod 11.24
422 TestNetworkPlugins/group/calico/DNS 0.2
423 TestNetworkPlugins/group/calico/Localhost 0.12
424 TestNetworkPlugins/group/calico/HairPin 0.12
425 TestNetworkPlugins/group/flannel/Start 78.59
426 TestNetworkPlugins/group/bridge/Start 89.88
427 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.18
428 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.22
429 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.2
430 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.27
431 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
432 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
433 TestNetworkPlugins/group/enable-default-cni/HairPin 0.2
434 TestNetworkPlugins/group/custom-flannel/DNS 0.15
435 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
436 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
438 TestStartStop/group/old-k8s-version/serial/FirstStart 96.46
440 TestStartStop/group/no-preload/serial/FirstStart 107.29
441 TestNetworkPlugins/group/flannel/ControllerPod 6.01
442 TestNetworkPlugins/group/flannel/KubeletFlags 0.18
443 TestNetworkPlugins/group/flannel/NetCatPod 11.25
444 TestNetworkPlugins/group/flannel/DNS 0.18
445 TestNetworkPlugins/group/flannel/Localhost 0.17
446 TestNetworkPlugins/group/flannel/HairPin 0.14
447 TestNetworkPlugins/group/bridge/KubeletFlags 0.19
448 TestNetworkPlugins/group/bridge/NetCatPod 12.26
450 TestStartStop/group/embed-certs/serial/FirstStart 85.82
451 TestNetworkPlugins/group/bridge/DNS 0.17
452 TestNetworkPlugins/group/bridge/Localhost 0.14
453 TestNetworkPlugins/group/bridge/HairPin 0.11
455 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 81.36
456 TestStartStop/group/old-k8s-version/serial/DeployApp 12.67
457 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.11
458 TestStartStop/group/old-k8s-version/serial/Stop 83.39
459 TestStartStop/group/no-preload/serial/DeployApp 11.33
460 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.11
461 TestStartStop/group/no-preload/serial/Stop 72.93
462 TestStartStop/group/embed-certs/serial/DeployApp 9.26
463 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.93
464 TestStartStop/group/embed-certs/serial/Stop 78.71
465 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.25
466 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.92
467 TestStartStop/group/default-k8s-diff-port/serial/Stop 87.53
468 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.13
469 TestStartStop/group/old-k8s-version/serial/SecondStart 51.93
470 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.14
471 TestStartStop/group/no-preload/serial/SecondStart 416.77
472 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.16
473 TestStartStop/group/embed-certs/serial/SecondStart 398.81
475 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.17
476 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 401.31
484 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.21
485 TestStartStop/group/old-k8s-version/serial/Pause 2.79
487 TestStartStop/group/newest-cni/serial/FirstStart 38.67
488 TestStartStop/group/newest-cni/serial/DeployApp 0
489 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.98
490 TestStartStop/group/newest-cni/serial/Stop 81.26
491 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.15
492 TestStartStop/group/newest-cni/serial/SecondStart 394.68
493 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.2
494 TestStartStop/group/no-preload/serial/Pause 2.64
505 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.19
506 TestStartStop/group/embed-certs/serial/Pause 2.33
507 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.19
508 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.31
509 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
510 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
511 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.19
512 TestStartStop/group/newest-cni/serial/Pause 2.33
x
+
TestDownloadOnly/v1.28.0/json-events (22.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-591868 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-591868 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (22.058670746s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (22.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1219 02:25:19.057620    8937 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1219 02:25:19.057712    8937 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-5010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-591868
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-591868: exit status 85 (67.683095ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-591868 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-591868 │ jenkins │ v1.37.0 │ 19 Dec 25 02:24 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 02:24:57
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 02:24:57.049149    8949 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:24:57.049360    8949 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:24:57.049368    8949 out.go:374] Setting ErrFile to fd 2...
	I1219 02:24:57.049372    8949 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:24:57.049533    8949 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5010/.minikube/bin
	W1219 02:24:57.049667    8949 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22230-5010/.minikube/config/config.json: open /home/jenkins/minikube-integration/22230-5010/.minikube/config/config.json: no such file or directory
	I1219 02:24:57.050083    8949 out.go:368] Setting JSON to true
	I1219 02:24:57.050926    8949 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":441,"bootTime":1766110656,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 02:24:57.050971    8949 start.go:143] virtualization: kvm guest
	I1219 02:24:57.054901    8949 out.go:99] [download-only-591868] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1219 02:24:57.054998    8949 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22230-5010/.minikube/cache/preloaded-tarball: no such file or directory
	I1219 02:24:57.055040    8949 notify.go:221] Checking for updates...
	I1219 02:24:57.055957    8949 out.go:171] MINIKUBE_LOCATION=22230
	I1219 02:24:57.056954    8949 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 02:24:57.058391    8949 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22230-5010/kubeconfig
	I1219 02:24:57.059412    8949 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5010/.minikube
	I1219 02:24:57.060448    8949 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1219 02:24:57.062331    8949 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1219 02:24:57.062558    8949 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 02:24:57.516776    8949 out.go:99] Using the kvm2 driver based on user configuration
	I1219 02:24:57.516811    8949 start.go:309] selected driver: kvm2
	I1219 02:24:57.516817    8949 start.go:928] validating driver "kvm2" against <nil>
	I1219 02:24:57.517198    8949 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1219 02:24:57.517916    8949 start_flags.go:411] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1219 02:24:57.518101    8949 start_flags.go:975] Wait components to verify : map[apiserver:true system_pods:true]
	I1219 02:24:57.518136    8949 cni.go:84] Creating CNI manager for ""
	I1219 02:24:57.518201    8949 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1219 02:24:57.518215    8949 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1219 02:24:57.518262    8949 start.go:353] cluster config:
	{Name:download-only-591868 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-591868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 02:24:57.518483    8949 iso.go:125] acquiring lock: {Name:mk42290af04a74a4dddf27fa33aac85c9bdccfe0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 02:24:57.519886    8949 out.go:99] Downloading VM boot image ...
	I1219 02:24:57.519924    8949 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso.sha256 -> /home/jenkins/minikube-integration/22230-5010/.minikube/cache/iso/amd64/minikube-v1.37.0-1765965980-22186-amd64.iso
	I1219 02:25:07.451199    8949 out.go:99] Starting "download-only-591868" primary control-plane node in "download-only-591868" cluster
	I1219 02:25:07.451244    8949 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1219 02:25:07.543203    8949 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1219 02:25:07.543226    8949 cache.go:65] Caching tarball of preloaded images
	I1219 02:25:07.543377    8949 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1219 02:25:07.544854    8949 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1219 02:25:07.544872    8949 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1219 02:25:07.642752    8949 preload.go:295] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1219 02:25:07.642882    8949 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/22230-5010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-591868 host does not exist
	  To start a cluster, run: "minikube start -p download-only-591868"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-591868
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.3/json-events (9.56s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-317892 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-317892 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (9.561365156s)
--- PASS: TestDownloadOnly/v1.34.3/json-events (9.56s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/preload-exists
I1219 02:25:28.976094    8937 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
I1219 02:25:28.976124    8937 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-5010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.3/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.3/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-317892
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-317892: exit status 85 (65.018335ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-591868 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-591868 │ jenkins │ v1.37.0 │ 19 Dec 25 02:24 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.37.0 │ 19 Dec 25 02:25 UTC │ 19 Dec 25 02:25 UTC │
	│ delete  │ -p download-only-591868                                                                                                                                                 │ download-only-591868 │ jenkins │ v1.37.0 │ 19 Dec 25 02:25 UTC │ 19 Dec 25 02:25 UTC │
	│ start   │ -o=json --download-only -p download-only-317892 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-317892 │ jenkins │ v1.37.0 │ 19 Dec 25 02:25 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 02:25:19
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 02:25:19.462431    9197 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:25:19.462704    9197 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:25:19.462715    9197 out.go:374] Setting ErrFile to fd 2...
	I1219 02:25:19.462721    9197 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:25:19.462920    9197 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5010/.minikube/bin
	I1219 02:25:19.463368    9197 out.go:368] Setting JSON to true
	I1219 02:25:19.464187    9197 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":463,"bootTime":1766110656,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 02:25:19.464233    9197 start.go:143] virtualization: kvm guest
	I1219 02:25:19.465669    9197 out.go:99] [download-only-317892] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 02:25:19.465788    9197 notify.go:221] Checking for updates...
	I1219 02:25:19.466859    9197 out.go:171] MINIKUBE_LOCATION=22230
	I1219 02:25:19.468042    9197 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 02:25:19.469188    9197 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22230-5010/kubeconfig
	I1219 02:25:19.470158    9197 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5010/.minikube
	I1219 02:25:19.471126    9197 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1219 02:25:19.472936    9197 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1219 02:25:19.473167    9197 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 02:25:19.501239    9197 out.go:99] Using the kvm2 driver based on user configuration
	I1219 02:25:19.501268    9197 start.go:309] selected driver: kvm2
	I1219 02:25:19.501276    9197 start.go:928] validating driver "kvm2" against <nil>
	I1219 02:25:19.501566    9197 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1219 02:25:19.502069    9197 start_flags.go:411] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1219 02:25:19.502201    9197 start_flags.go:975] Wait components to verify : map[apiserver:true system_pods:true]
	I1219 02:25:19.502230    9197 cni.go:84] Creating CNI manager for ""
	I1219 02:25:19.502290    9197 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1219 02:25:19.502302    9197 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1219 02:25:19.502376    9197 start.go:353] cluster config:
	{Name:download-only-317892 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:download-only-317892 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 02:25:19.502468    9197 iso.go:125] acquiring lock: {Name:mk42290af04a74a4dddf27fa33aac85c9bdccfe0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 02:25:19.503597    9197 out.go:99] Starting "download-only-317892" primary control-plane node in "download-only-317892" cluster
	I1219 02:25:19.503623    9197 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1219 02:25:19.960550    9197 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1219 02:25:19.960627    9197 cache.go:65] Caching tarball of preloaded images
	I1219 02:25:19.960847    9197 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1219 02:25:19.962376    9197 out.go:99] Downloading Kubernetes v1.34.3 preload ...
	I1219 02:25:19.962395    9197 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1219 02:25:20.063418    9197 preload.go:295] Got checksum from GCS API "fdea575627999e8631bb8fa579d884c7"
	I1219 02:25:20.063453    9197 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:fdea575627999e8631bb8fa579d884c7 -> /home/jenkins/minikube-integration/22230-5010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-317892 host does not exist
	  To start a cluster, run: "minikube start -p download-only-317892"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.3/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.3/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.3/DeleteAll (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-317892
--- PASS: TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-rc.1/json-events (9.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-064321 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-064321 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (9.153238275s)
--- PASS: TestDownloadOnly/v1.35.0-rc.1/json-events (9.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-rc.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/preload-exists
I1219 02:25:38.472721    8937 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
I1219 02:25:38.472773    8937 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-5010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-rc.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-rc.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-064321
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-064321: exit status 85 (67.66085ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                     ARGS                                                                                     │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-591868 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio      │ download-only-591868 │ jenkins │ v1.37.0 │ 19 Dec 25 02:24 UTC │                     │
	│ delete  │ --all                                                                                                                                                                        │ minikube             │ jenkins │ v1.37.0 │ 19 Dec 25 02:25 UTC │ 19 Dec 25 02:25 UTC │
	│ delete  │ -p download-only-591868                                                                                                                                                      │ download-only-591868 │ jenkins │ v1.37.0 │ 19 Dec 25 02:25 UTC │ 19 Dec 25 02:25 UTC │
	│ start   │ -o=json --download-only -p download-only-317892 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio      │ download-only-317892 │ jenkins │ v1.37.0 │ 19 Dec 25 02:25 UTC │                     │
	│ delete  │ --all                                                                                                                                                                        │ minikube             │ jenkins │ v1.37.0 │ 19 Dec 25 02:25 UTC │ 19 Dec 25 02:25 UTC │
	│ delete  │ -p download-only-317892                                                                                                                                                      │ download-only-317892 │ jenkins │ v1.37.0 │ 19 Dec 25 02:25 UTC │ 19 Dec 25 02:25 UTC │
	│ start   │ -o=json --download-only -p download-only-064321 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-064321 │ jenkins │ v1.37.0 │ 19 Dec 25 02:25 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 02:25:29
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 02:25:29.369046    9393 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:25:29.369153    9393 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:25:29.369161    9393 out.go:374] Setting ErrFile to fd 2...
	I1219 02:25:29.369165    9393 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:25:29.369352    9393 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5010/.minikube/bin
	I1219 02:25:29.369777    9393 out.go:368] Setting JSON to true
	I1219 02:25:29.370608    9393 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":473,"bootTime":1766110656,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 02:25:29.370672    9393 start.go:143] virtualization: kvm guest
	I1219 02:25:29.372208    9393 out.go:99] [download-only-064321] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 02:25:29.372331    9393 notify.go:221] Checking for updates...
	I1219 02:25:29.373427    9393 out.go:171] MINIKUBE_LOCATION=22230
	I1219 02:25:29.374611    9393 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 02:25:29.375742    9393 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22230-5010/kubeconfig
	I1219 02:25:29.376782    9393 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5010/.minikube
	I1219 02:25:29.377709    9393 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1219 02:25:29.379487    9393 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1219 02:25:29.379693    9393 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 02:25:29.407982    9393 out.go:99] Using the kvm2 driver based on user configuration
	I1219 02:25:29.408005    9393 start.go:309] selected driver: kvm2
	I1219 02:25:29.408011    9393 start.go:928] validating driver "kvm2" against <nil>
	I1219 02:25:29.408319    9393 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1219 02:25:29.408764    9393 start_flags.go:411] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1219 02:25:29.408888    9393 start_flags.go:975] Wait components to verify : map[apiserver:true system_pods:true]
	I1219 02:25:29.408913    9393 cni.go:84] Creating CNI manager for ""
	I1219 02:25:29.408959    9393 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1219 02:25:29.408972    9393 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1219 02:25:29.409008    9393 start.go:353] cluster config:
	{Name:download-only-064321 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:download-only-064321 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 02:25:29.409099    9393 iso.go:125] acquiring lock: {Name:mk42290af04a74a4dddf27fa33aac85c9bdccfe0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 02:25:29.410048    9393 out.go:99] Starting "download-only-064321" primary control-plane node in "download-only-064321" cluster
	I1219 02:25:29.410062    9393 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1219 02:25:29.565975    9393 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1219 02:25:29.565998    9393 cache.go:65] Caching tarball of preloaded images
	I1219 02:25:29.566145    9393 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1219 02:25:29.567633    9393 out.go:99] Downloading Kubernetes v1.35.0-rc.1 preload ...
	I1219 02:25:29.567648    9393 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1219 02:25:29.663171    9393 preload.go:295] Got checksum from GCS API "46a82b10f18f180acaede5af8ca381a9"
	I1219 02:25:29.663207    9393 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:46a82b10f18f180acaede5af8ca381a9 -> /home/jenkins/minikube-integration/22230-5010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-064321 host does not exist
	  To start a cluster, run: "minikube start -p download-only-064321"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-rc.1/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-rc.1/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-rc.1/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds (0.33s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-064321
--- PASS: TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds (0.33s)

                                                
                                    
x
+
TestBinaryMirror (0.63s)

                                                
                                                
=== RUN   TestBinaryMirror
I1219 02:25:39.841373    8937 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-761262 --alsologtostderr --binary-mirror http://127.0.0.1:37093 --driver=kvm2  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-761262" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-761262
--- PASS: TestBinaryMirror (0.63s)

                                                
                                    
x
+
TestOffline (101.4s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-052125 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-052125 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m40.540264932s)
helpers_test.go:176: Cleaning up "offline-crio-052125" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-052125
--- PASS: TestOffline (101.40s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-959667
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-959667: exit status 85 (60.630348ms)

                                                
                                                
-- stdout --
	* Profile "addons-959667" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-959667"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-959667
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-959667: exit status 85 (59.847085ms)

                                                
                                                
-- stdout --
	* Profile "addons-959667" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-959667"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/Setup (127.94s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-959667 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-959667 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m7.941944492s)
--- PASS: TestAddons/Setup (127.94s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-959667 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-959667 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (11.49s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-959667 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-959667 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [1db5ca4f-1d15-4ebb-b546-a808b3122492] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [1db5ca4f-1d15-4ebb-b546-a808b3122492] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 11.003994741s
addons_test.go:696: (dbg) Run:  kubectl --context addons-959667 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-959667 describe sa gcp-auth-test
addons_test.go:746: (dbg) Run:  kubectl --context addons-959667 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.49s)

                                                
                                    
x
+
TestAddons/parallel/Registry (22.8s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 7.414482ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-n9mgj" [894e49b5-4c73-41a0-8355-e53c7d367f9b] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.008567308s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-zdbp9" [9ed57fbb-7a19-4bdf-8b88-ef375ffb880b] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004128077s
addons_test.go:394: (dbg) Run:  kubectl --context addons-959667 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-959667 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-959667 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (11.751487601s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-amd64 -p addons-959667 ip
2025/12/19 02:28:31 [DEBUG] GET http://192.168.39.204:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-959667 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (22.80s)

                                                
                                    
x
+
TestAddons/parallel/RegistryCreds (0.62s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 4.271375ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-959667
addons_test.go:334: (dbg) Run:  kubectl --context addons-959667 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-959667 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.62s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (10.92s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-v2hbs" [e34035d7-0be3-4373-9ebd-bf5dd9db5a03] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004593989s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-959667 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-959667 addons disable inspektor-gadget --alsologtostderr -v=1: (5.912449509s)
--- PASS: TestAddons/parallel/InspektorGadget (10.92s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.15s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 7.455867ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-kdr4c" [b5838bd6-a786-4099-a4ee-b68d665097a8] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.008299312s
addons_test.go:465: (dbg) Run:  kubectl --context addons-959667 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-959667 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-959667 addons disable metrics-server --alsologtostderr -v=1: (1.044308936s)
--- PASS: TestAddons/parallel/MetricsServer (6.15s)

                                                
                                    
TestAddons/parallel/CSI (54.14s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1219 02:28:32.024928    8937 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1219 02:28:32.028888    8937 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1219 02:28:32.028916    8937 kapi.go:107] duration metric: took 3.994158ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 4.006008ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-959667 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-959667 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-959667 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-959667 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-959667 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-959667 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-959667 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-959667 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-959667 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-959667 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-959667 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-959667 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-959667 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-959667 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-959667 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-959667 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-959667 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-959667 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-959667 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-959667 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-959667 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [fffb4289-7ffe-4e14-a562-247501565743] Pending
helpers_test.go:353: "task-pv-pod" [fffb4289-7ffe-4e14-a562-247501565743] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 6.004225923s
addons_test.go:574: (dbg) Run:  kubectl --context addons-959667 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-959667 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-959667 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-959667 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-959667 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-959667 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-959667 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-959667 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-959667 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-959667 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-959667 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-959667 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-959667 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-959667 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-959667 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-959667 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-959667 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-959667 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-959667 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-959667 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [088b69aa-028c-442b-a3f1-71cca1bd58b9] Pending
helpers_test.go:353: "task-pv-pod-restore" [088b69aa-028c-442b-a3f1-71cca1bd58b9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [088b69aa-028c-442b-a3f1-71cca1bd58b9] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.00375695s
addons_test.go:616: (dbg) Run:  kubectl --context addons-959667 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-959667 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-959667 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-959667 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-959667 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-959667 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.838036636s)
--- PASS: TestAddons/parallel/CSI (54.14s)

                                                
                                    
TestAddons/parallel/Headlamp (20.66s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-959667 --alsologtostderr -v=1
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-dfcdc64b-sbs2t" [15d97ffc-c6d4-4aa2-a57c-972a32e5f942] Pending
helpers_test.go:353: "headlamp-dfcdc64b-sbs2t" [15d97ffc-c6d4-4aa2-a57c-972a32e5f942] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-dfcdc64b-sbs2t" [15d97ffc-c6d4-4aa2-a57c-972a32e5f942] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.004051025s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-959667 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-959667 addons disable headlamp --alsologtostderr -v=1: (5.84669589s)
--- PASS: TestAddons/parallel/Headlamp (20.66s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.58s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-d4mrj" [9ef8ea3b-16f7-4db3-a143-a828e5efb326] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004510757s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-959667 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.58s)

                                                
                                    
TestAddons/parallel/LocalPath (55.77s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-959667 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-959667 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-959667 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-959667 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-959667 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-959667 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-959667 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-959667 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-959667 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [b9e7e101-b195-4a8c-a803-048d4c7c8e0a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [b9e7e101-b195-4a8c-a803-048d4c7c8e0a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [b9e7e101-b195-4a8c-a803-048d4c7c8e0a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.003720556s
addons_test.go:969: (dbg) Run:  kubectl --context addons-959667 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-amd64 -p addons-959667 ssh "cat /opt/local-path-provisioner/pvc-f78e963f-a5db-43da-8670-e54bf8a0fc73_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-959667 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-959667 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-959667 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-959667 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.907213211s)
--- PASS: TestAddons/parallel/LocalPath (55.77s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.7s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-rzr4s" [fa691a5a-f568-49f5-b511-dddccd273edc] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.007361325s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-959667 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.70s)

                                                
                                    
TestAddons/parallel/Yakd (11.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-6654c87f9b-rzmgb" [9e0b1a60-c2eb-4dd3-816f-fd79415f154c] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.024287642s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-959667 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-959667 addons disable yakd --alsologtostderr -v=1: (5.985598127s)
--- PASS: TestAddons/parallel/Yakd (11.01s)

                                                
                                    
TestAddons/StoppedEnableDisable (87.52s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-959667
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-959667: (1m27.329320771s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-959667
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-959667
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-959667
--- PASS: TestAddons/StoppedEnableDisable (87.52s)

                                                
                                    
TestCertOptions (56.53s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-992289 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-992289 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (55.180195644s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-992289 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-992289 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-992289 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-992289" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-992289
--- PASS: TestCertOptions (56.53s)

                                                
                                    
TestCertExpiration (257.4s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-387964 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-387964 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (55.531778949s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-387964 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-387964 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (21.016042818s)
helpers_test.go:176: Cleaning up "cert-expiration-387964" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-387964
--- PASS: TestCertExpiration (257.40s)

                                                
                                    
TestForceSystemdFlag (38.82s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-589340 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-589340 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (37.584985041s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-589340 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:176: Cleaning up "force-systemd-flag-589340" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-589340
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-589340: (1.065655702s)
--- PASS: TestForceSystemdFlag (38.82s)

                                                
                                    
TestForceSystemdEnv (55.97s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-919893 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-919893 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (55.045426165s)
helpers_test.go:176: Cleaning up "force-systemd-env-919893" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-919893
--- PASS: TestForceSystemdEnv (55.97s)

                                                
                                    
TestErrorSpam/setup (34.14s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-701672 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-701672 --driver=kvm2  --container-runtime=crio
E1219 02:32:49.631791    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:32:49.637100    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:32:49.647372    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:32:49.667666    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:32:49.707921    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:32:49.788222    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:32:49.948667    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:32:50.269254    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:32:50.910162    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:32:52.190456    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:32:54.750707    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-701672 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-701672 --driver=kvm2  --container-runtime=crio: (34.143579327s)
--- PASS: TestErrorSpam/setup (34.14s)

                                                
                                    
TestErrorSpam/start (0.3s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-701672 --log_dir /tmp/nospam-701672 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-701672 --log_dir /tmp/nospam-701672 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-701672 --log_dir /tmp/nospam-701672 start --dry-run
--- PASS: TestErrorSpam/start (0.30s)

                                                
                                    
TestErrorSpam/status (0.62s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-701672 --log_dir /tmp/nospam-701672 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-701672 --log_dir /tmp/nospam-701672 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-701672 --log_dir /tmp/nospam-701672 status
--- PASS: TestErrorSpam/status (0.62s)

                                                
                                    
TestErrorSpam/pause (1.43s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-701672 --log_dir /tmp/nospam-701672 pause
E1219 02:32:59.871843    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-701672 --log_dir /tmp/nospam-701672 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-701672 --log_dir /tmp/nospam-701672 pause
--- PASS: TestErrorSpam/pause (1.43s)

                                                
                                    
TestErrorSpam/unpause (1.63s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-701672 --log_dir /tmp/nospam-701672 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-701672 --log_dir /tmp/nospam-701672 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-701672 --log_dir /tmp/nospam-701672 unpause
--- PASS: TestErrorSpam/unpause (1.63s)

                                                
                                    
TestErrorSpam/stop (5.32s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-701672 --log_dir /tmp/nospam-701672 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-701672 --log_dir /tmp/nospam-701672 stop: (1.827296035s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-701672 --log_dir /tmp/nospam-701672 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-701672 --log_dir /tmp/nospam-701672 stop: (1.656845702s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-701672 --log_dir /tmp/nospam-701672 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-701672 --log_dir /tmp/nospam-701672 stop: (1.830827767s)
--- PASS: TestErrorSpam/stop (5.32s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22230-5010/.minikube/files/etc/test/nested/copy/8937/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (47.98s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-199791 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1219 02:33:10.112295    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:33:30.592485    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-199791 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (47.983302659s)
--- PASS: TestFunctional/serial/StartWithProxy (47.98s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (59.56s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1219 02:33:56.585447    8937 config.go:182] Loaded profile config "functional-199791": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-199791 --alsologtostderr -v=8
E1219 02:34:11.554160    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-199791 --alsologtostderr -v=8: (59.559599206s)
functional_test.go:678: soft start took 59.560233944s for "functional-199791" cluster.
I1219 02:34:56.145356    8937 config.go:182] Loaded profile config "functional-199791": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestFunctional/serial/SoftStart (59.56s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-199791 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-199791 cache add registry.k8s.io/pause:3.1: (1.047024076s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-199791 cache add registry.k8s.io/pause:3.3: (1.08719699s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-199791 /tmp/TestFunctionalserialCacheCmdcacheadd_local4223379098/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 cache add minikube-local-cache-test:functional-199791
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-199791 cache add minikube-local-cache-test:functional-199791: (1.734732965s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 cache delete minikube-local-cache-test:functional-199791
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-199791
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.17s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.17s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.48s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-199791 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (166.097422ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.48s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 kubectl -- --context functional-199791 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-199791 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (35.44s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-199791 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1219 02:35:33.477911    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-199791 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.439258113s)
functional_test.go:776: restart took 35.439364905s for "functional-199791" cluster.
I1219 02:35:38.952356    8937 config.go:182] Loaded profile config "functional-199791": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestFunctional/serial/ExtraConfig (35.44s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-199791 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.18s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-199791 logs: (1.182035343s)
--- PASS: TestFunctional/serial/LogsCmd (1.18s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.19s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 logs --file /tmp/TestFunctionalserialLogsFileCmd3487646038/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-199791 logs --file /tmp/TestFunctionalserialLogsFileCmd3487646038/001/logs.txt: (1.189712104s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.19s)

                                                
                                    
TestFunctional/serial/InvalidService (3.82s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-199791 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-199791
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-199791: exit status 115 (212.112212ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL             │
	├───────────┼─────────────┼─────────────┼────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.97:32049 │
	└───────────┴─────────────┴─────────────┴────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-199791 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.82s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-199791 config get cpus: exit status 14 (59.421232ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-199791 config get cpus: exit status 14 (55.686394ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.37s)

                                                
                                    
TestFunctional/parallel/DryRun (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-199791 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-199791 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (101.548439ms)

                                                
                                                
-- stdout --
	* [functional-199791] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22230
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22230-5010/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5010/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1219 02:35:47.320035   14625 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:35:47.320274   14625 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:35:47.320282   14625 out.go:374] Setting ErrFile to fd 2...
	I1219 02:35:47.320286   14625 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:35:47.320455   14625 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5010/.minikube/bin
	I1219 02:35:47.320860   14625 out.go:368] Setting JSON to false
	I1219 02:35:47.321638   14625 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1091,"bootTime":1766110656,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 02:35:47.321684   14625 start.go:143] virtualization: kvm guest
	I1219 02:35:47.323228   14625 out.go:179] * [functional-199791] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 02:35:47.324394   14625 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 02:35:47.324383   14625 notify.go:221] Checking for updates...
	I1219 02:35:47.326261   14625 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 02:35:47.327330   14625 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-5010/kubeconfig
	I1219 02:35:47.328320   14625 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5010/.minikube
	I1219 02:35:47.329246   14625 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 02:35:47.330179   14625 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 02:35:47.331594   14625 config.go:182] Loaded profile config "functional-199791": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:35:47.332045   14625 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 02:35:47.362157   14625 out.go:179] * Using the kvm2 driver based on existing profile
	I1219 02:35:47.363117   14625 start.go:309] selected driver: kvm2
	I1219 02:35:47.363151   14625 start.go:928] validating driver "kvm2" against &{Name:functional-199791 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.3 ClusterName:functional-199791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
ntString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 02:35:47.363231   14625 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 02:35:47.365160   14625 out.go:203] 
	W1219 02:35:47.366079   14625 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1219 02:35:47.367046   14625 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-199791 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.22s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-199791 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-199791 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (117.189607ms)

                                                
                                                
-- stdout --
	* [functional-199791] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22230
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22230-5010/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5010/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1219 02:35:47.206966   14598 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:35:47.207158   14598 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:35:47.207169   14598 out.go:374] Setting ErrFile to fd 2...
	I1219 02:35:47.207176   14598 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:35:47.208464   14598 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5010/.minikube/bin
	I1219 02:35:47.208993   14598 out.go:368] Setting JSON to false
	I1219 02:35:47.209816   14598 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1091,"bootTime":1766110656,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 02:35:47.209901   14598 start.go:143] virtualization: kvm guest
	I1219 02:35:47.212408   14598 out.go:179] * [functional-199791] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1219 02:35:47.213856   14598 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 02:35:47.213854   14598 notify.go:221] Checking for updates...
	I1219 02:35:47.215996   14598 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 02:35:47.218070   14598 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-5010/kubeconfig
	I1219 02:35:47.222117   14598 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5010/.minikube
	I1219 02:35:47.223232   14598 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 02:35:47.224322   14598 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 02:35:47.228292   14598 config.go:182] Loaded profile config "functional-199791": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:35:47.228869   14598 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 02:35:47.259496   14598 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1219 02:35:47.260592   14598 start.go:309] selected driver: kvm2
	I1219 02:35:47.260608   14598 start.go:928] validating driver "kvm2" against &{Name:functional-199791 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.3 ClusterName:functional-199791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
ntString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 02:35:47.260728   14598 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 02:35:47.262536   14598 out.go:203] 
	W1219 02:35:47.263602   14598 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1219 02:35:47.264638   14598 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.62s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (246.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [009e7ad8-75b8-4205-91aa-980d65bb83a4] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.005295058s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-199791 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-199791 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-199791 get pvc myclaim -o=json
I1219 02:35:51.474778    8937 retry.go:31] will retry after 1.317076579s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:7e13e0a0-32d7-4cb0-9749-e701909bee51 ResourceVersion:733 Generation:0 CreationTimestamp:2025-12-19 02:35:51 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc00154f100 VolumeMode:0xc00154f110 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-199791 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-199791 apply -f testdata/storage-provisioner/pod.yaml
I1219 02:35:53.016145    8937 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [c0763677-d124-4a1a-be82-06a44fc9800f] Pending
helpers_test.go:353: "sp-pod" [c0763677-d124-4a1a-be82-06a44fc9800f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [c0763677-d124-4a1a-be82-06a44fc9800f] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 3m51.003985125s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-199791 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-199791 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-199791 delete -f testdata/storage-provisioner/pod.yaml: (1.248960497s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-199791 apply -f testdata/storage-provisioner/pod.yaml
I1219 02:39:45.506983    8937 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [1bb2654b-9961-46d3-b74f-9fbbcb8c3dc8] Pending
helpers_test.go:353: "sp-pod" [1bb2654b-9961-46d3-b74f-9fbbcb8c3dc8] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.00342102s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-199791 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (246.41s)
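Note: the claim this test waits on can be read directly from the last-applied-configuration annotation echoed in the retry message above. Rendered as a manifest it is equivalent to the following sketch (reconstructed from the log for readability; not the verbatim contents of testdata/storage-provisioner/pvc.yaml):

	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: myclaim
	  namespace: default
	spec:
	  accessModes:
	    - ReadWriteOnce
	  resources:
	    requests:
	      storage: 500Mi
	  volumeMode: Filesystem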

                                                
                                    
TestFunctional/parallel/SSHCmd (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.29s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 ssh -n functional-199791 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 cp functional-199791:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3520073410/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 ssh -n functional-199791 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 ssh -n functional-199791 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.08s)

                                                
                                    
TestFunctional/parallel/MySQL (264.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-199791 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-6bcdcbc558-qcs65" [b0bf0475-2199-4014-9972-16c0ce9f2b22] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-6bcdcbc558-qcs65" [b0bf0475-2199-4014-9972-16c0ce9f2b22] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 4m17.103526128s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-199791 exec mysql-6bcdcbc558-qcs65 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-199791 exec mysql-6bcdcbc558-qcs65 -- mysql -ppassword -e "show databases;": exit status 1 (141.219167ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1219 02:44:09.172580    8937 retry.go:31] will retry after 1.496610926s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-199791 exec mysql-6bcdcbc558-qcs65 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-199791 exec mysql-6bcdcbc558-qcs65 -- mysql -ppassword -e "show databases;": exit status 1 (160.740998ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1219 02:44:10.830952    8937 retry.go:31] will retry after 2.100400206s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-199791 exec mysql-6bcdcbc558-qcs65 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-199791 exec mysql-6bcdcbc558-qcs65 -- mysql -ppassword -e "show databases;": exit status 1 (219.932901ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1219 02:44:13.152616    8937 retry.go:31] will retry after 2.929312252s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-199791 exec mysql-6bcdcbc558-qcs65 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (264.43s)

                                                
                                    
TestFunctional/parallel/FileSync (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/8937/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 ssh "sudo cat /etc/test/nested/copy/8937/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.16s)

                                                
                                    
TestFunctional/parallel/CertSync (0.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/8937.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 ssh "sudo cat /etc/ssl/certs/8937.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/8937.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 ssh "sudo cat /usr/share/ca-certificates/8937.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/89372.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 ssh "sudo cat /etc/ssl/certs/89372.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/89372.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 ssh "sudo cat /usr/share/ca-certificates/89372.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.93s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-199791 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-199791 ssh "sudo systemctl is-active docker": exit status 1 (154.23317ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-199791 ssh "sudo systemctl is-active containerd": exit status 1 (156.514241ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.31s)

                                                
                                    
TestFunctional/parallel/License (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.30s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.34s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "233.312097ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "67.233594ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.30s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "247.911333ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "65.97745ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (38.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:74: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-199791 /tmp/TestFunctionalparallelMountCmdany-port1657076071/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:108: wrote "test-1766111746281552131" to /tmp/TestFunctionalparallelMountCmdany-port1657076071/001/created-by-test
functional_test_mount_test.go:108: wrote "test-1766111746281552131" to /tmp/TestFunctionalparallelMountCmdany-port1657076071/001/created-by-test-removed-by-pod
functional_test_mount_test.go:108: wrote "test-1766111746281552131" to /tmp/TestFunctionalparallelMountCmdany-port1657076071/001/test-1766111746281552131
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:116: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-199791 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (158.625266ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1219 02:35:46.440532    8937 retry.go:31] will retry after 636.920686ms: exit status 1
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 ssh -- ls -la /mount-9p
functional_test_mount_test.go:134: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 19 02:35 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 19 02:35 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 19 02:35 test-1766111746281552131
functional_test_mount_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 ssh cat /mount-9p/test-1766111746281552131
functional_test_mount_test.go:149: (dbg) Run:  kubectl --context functional-199791 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [22cf93c9-d4ef-4b69-aa11-9269bc23bee3] Pending
helpers_test.go:353: "busybox-mount" [22cf93c9-d4ef-4b69-aa11-9269bc23bee3] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [22cf93c9-d4ef-4b69-aa11-9269bc23bee3] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [22cf93c9-d4ef-4b69-aa11-9269bc23bee3] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 36.003524065s
functional_test_mount_test.go:170: (dbg) Run:  kubectl --context functional-199791 logs busybox-mount
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:95: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-199791 /tmp/TestFunctionalparallelMountCmdany-port1657076071/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (38.03s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:219: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-199791 /tmp/TestFunctionalparallelMountCmdspecific-port646075116/001:/mount-9p --alsologtostderr -v=1 --port 41387]
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:249: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-199791 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (155.505911ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1219 02:36:24.472264    8937 retry.go:31] will retry after 586.669497ms: exit status 1
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:263: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 ssh -- ls -la /mount-9p
functional_test_mount_test.go:267: guest mount directory contents
total 0
functional_test_mount_test.go:269: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-199791 /tmp/TestFunctionalparallelMountCmdspecific-port646075116/001:/mount-9p --alsologtostderr -v=1 --port 41387] ...
functional_test_mount_test.go:270: reading mount text
functional_test_mount_test.go:284: done reading mount text
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-199791 ssh "sudo umount -f /mount-9p": exit status 1 (156.433078ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:238: "out/minikube-linux-amd64 -p functional-199791 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:240: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-199791 /tmp/TestFunctionalparallelMountCmdspecific-port646075116/001:/mount-9p --alsologtostderr -v=1 --port 41387] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.41s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-199791 /tmp/TestFunctionalparallelMountCmdVerifyCleanup735182126/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-199791 /tmp/TestFunctionalparallelMountCmdVerifyCleanup735182126/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-199791 /tmp/TestFunctionalparallelMountCmdVerifyCleanup735182126/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-199791 ssh "findmnt -T" /mount1: exit status 1 (166.403787ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1219 02:36:25.896351    8937 retry.go:31] will retry after 558.445099ms: exit status 1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 ssh "findmnt -T" /mount2
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 ssh "findmnt -T" /mount3
functional_test_mount_test.go:376: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-199791 --kill=true
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-199791 /tmp/TestFunctionalparallelMountCmdVerifyCleanup735182126/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-199791 /tmp/TestFunctionalparallelMountCmdVerifyCleanup735182126/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-199791 /tmp/TestFunctionalparallelMountCmdVerifyCleanup735182126/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.23s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.40s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-199791 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.3
registry.k8s.io/kube-proxy:v1.34.3
registry.k8s.io/kube-controller-manager:v1.34.3
registry.k8s.io/kube-apiserver:v1.34.3
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
localhost/minikube-local-cache-test:functional-199791
localhost/kicbase/echo-server:functional-199791
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-199791 image ls --format short --alsologtostderr:
I1219 02:40:49.897248   16452 out.go:360] Setting OutFile to fd 1 ...
I1219 02:40:49.897474   16452 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:40:49.897481   16452 out.go:374] Setting ErrFile to fd 2...
I1219 02:40:49.897486   16452 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:40:49.897703   16452 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5010/.minikube/bin
I1219 02:40:49.898184   16452 config.go:182] Loaded profile config "functional-199791": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1219 02:40:49.898275   16452 config.go:182] Loaded profile config "functional-199791": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1219 02:40:49.900362   16452 ssh_runner.go:195] Run: systemctl --version
I1219 02:40:49.902368   16452 main.go:144] libmachine: domain functional-199791 has defined MAC address 52:54:00:47:b9:97 in network mk-functional-199791
I1219 02:40:49.902715   16452 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:47:b9:97", ip: ""} in network mk-functional-199791: {Iface:virbr1 ExpiryTime:2025-12-19 03:33:22 +0000 UTC Type:0 Mac:52:54:00:47:b9:97 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:functional-199791 Clientid:01:52:54:00:47:b9:97}
I1219 02:40:49.902739   16452 main.go:144] libmachine: domain functional-199791 has defined IP address 192.168.39.97 and MAC address 52:54:00:47:b9:97 in network mk-functional-199791
I1219 02:40:49.902870   16452 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/functional-199791/id_rsa Username:docker}
I1219 02:40:49.984974   16452 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-199791 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.3            │ 5826b25d990d7 │ 76MB   │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ localhost/my-image                      │ functional-199791  │ 300f679724b60 │ 1.47MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.3            │ aa27095f56193 │ 89.1MB │
│ public.ecr.aws/nginx/nginx              │ alpine             │ 04da2b0513cd7 │ 55.2MB │
│ registry.k8s.io/kube-proxy              │ v1.34.3            │ 36eef8e07bdd6 │ 73.1MB │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ localhost/kicbase/echo-server           │ functional-199791  │ 9056ab77afb8e │ 4.94MB │
│ localhost/minikube-local-cache-test     │ functional-199791  │ bd14014b4b75a │ 3.33kB │
│ registry.k8s.io/kube-scheduler          │ v1.34.3            │ aec12dadf56dd │ 53.9MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-199791 image ls --format table --alsologtostderr:
I1219 02:40:53.857909   16534 out.go:360] Setting OutFile to fd 1 ...
I1219 02:40:53.857996   16534 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:40:53.858004   16534 out.go:374] Setting ErrFile to fd 2...
I1219 02:40:53.858008   16534 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:40:53.858231   16534 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5010/.minikube/bin
I1219 02:40:53.858726   16534 config.go:182] Loaded profile config "functional-199791": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1219 02:40:53.858812   16534 config.go:182] Loaded profile config "functional-199791": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1219 02:40:53.860694   16534 ssh_runner.go:195] Run: systemctl --version
I1219 02:40:53.862668   16534 main.go:144] libmachine: domain functional-199791 has defined MAC address 52:54:00:47:b9:97 in network mk-functional-199791
I1219 02:40:53.863033   16534 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:47:b9:97", ip: ""} in network mk-functional-199791: {Iface:virbr1 ExpiryTime:2025-12-19 03:33:22 +0000 UTC Type:0 Mac:52:54:00:47:b9:97 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:functional-199791 Clientid:01:52:54:00:47:b9:97}
I1219 02:40:53.863056   16534 main.go:144] libmachine: domain functional-199791 has defined IP address 192.168.39.97 and MAC address 52:54:00:47:b9:97 in network mk-functional-199791
I1219 02:40:53.863189   16534 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/functional-199791/id_rsa Username:docker}
I1219 02:40:53.941713   16534 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.17s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-199791 image ls --format json --alsologtostderr:
[{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460","registry.k8s.io/kube-apiserver@sha256:9b2e9bae4dc94991e51c601ba6a00369b45064243ba7822143b286edb9d41f9e"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.3"],"size":"89050097"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"a3e246e9556e93d71e
2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691","repoDigests":["registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6","registry.k8s.io/kube-proxy@sha256:aee44d152c9eaa4f3e10584e61ee501a094880168db257af1201c806982a0945"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.3"],"size":"73145241"},{"id":"aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78","repoDigests":["registry.k8s.io/kube-scheduler@sha256:490ff7b484d67db4a77e8d4bba9f12da68f6a3cae8da3b977522b60c8b5092c9","registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2"],"repoTags":["registry.k8s.io/kube-scheduler:v1.3
4.3"],"size":"53853013"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"bd14014b4b75adc182145d1e300fd4204bd8e1109d5539ab5e74fe1e4332ddc4","repoDigests":["localhost/minikube-local-cache-test@sha256:a739a2434ea3709446b058f592d2f837178ab57b936fb72678bca2969f929cc0"],"repoTags":["localhost/minikube-local-cache-test:functional-199791"],"size":"3330"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoD
igests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"490ffcb314f4498be7edd8fd15cf85434f46aa525da03579c1917faec0dfcece","repoDigests":["docker.io/library/2223861967e40e9d99e74e58d75d7ab5ee6e261fe56dd45c63e62cb6847d69bf-tmp@sha256:1f85930d070517acc422cfa2f6ac9d92643d6f6b7ff02340de75c659bcb0b56e"],"repoTags":[],"size":"1466018"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5","repoDige
sts":["public.ecr.aws/nginx/nginx@sha256:00e053577693e0ee5f7f8b433cdb249624af188622d0da5df20eef4e25a0881c","public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55157106"},{"id":"5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954","registry.k8s.io/kube-controller-manager@sha256:90ceecee64b3dac0e619928b9b21522bde1a120bb039971110ab68d830c1f1a2"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.3"],"size":"76004183"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":
"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-199791"],"size":"4943877"},{"id":"300f679724b607254be829b88ffa5f0cfce1822dfc89e1e490783e1db4525a58","repoDigests":["localhost/my-image@sha256:c8eb3c64a5f1853944dd8a719f8acb73a7
4c5bea568ea12234a678d0c9352da6"],"repoTags":["localhost/my-image:functional-199791"],"size":"1468600"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-199791 image ls --format json --alsologtostderr:
I1219 02:40:53.680876   16523 out.go:360] Setting OutFile to fd 1 ...
I1219 02:40:53.680971   16523 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:40:53.680979   16523 out.go:374] Setting ErrFile to fd 2...
I1219 02:40:53.680985   16523 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:40:53.681178   16523 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5010/.minikube/bin
I1219 02:40:53.681765   16523 config.go:182] Loaded profile config "functional-199791": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1219 02:40:53.681854   16523 config.go:182] Loaded profile config "functional-199791": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1219 02:40:53.683731   16523 ssh_runner.go:195] Run: systemctl --version
I1219 02:40:53.685849   16523 main.go:144] libmachine: domain functional-199791 has defined MAC address 52:54:00:47:b9:97 in network mk-functional-199791
I1219 02:40:53.686205   16523 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:47:b9:97", ip: ""} in network mk-functional-199791: {Iface:virbr1 ExpiryTime:2025-12-19 03:33:22 +0000 UTC Type:0 Mac:52:54:00:47:b9:97 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:functional-199791 Clientid:01:52:54:00:47:b9:97}
I1219 02:40:53.686228   16523 main.go:144] libmachine: domain functional-199791 has defined IP address 192.168.39.97 and MAC address 52:54:00:47:b9:97 in network mk-functional-199791
I1219 02:40:53.686345   16523 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/functional-199791/id_rsa Username:docker}
I1219 02:40:53.769200   16523 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.18s)
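The JSON form shown above is an array of objects with id, repoDigests, repoTags and size fields, so it pipes cleanly into jq. A minimal host-side filtering sketch, assuming jq is available (this is not something the test itself runs):

    # list only the tags known to the crio image store in the VM
    out/minikube-linux-amd64 -p functional-199791 image ls --format json | jq -r '.[].repoTags[]'
    # list image IDs with their reported size in bytes, smallest first
    out/minikube-linux-amd64 -p functional-199791 image ls --format json | jq -r '.[] | "\(.size)\t\(.id)"' | sort -n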

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-199791 image ls --format yaml --alsologtostderr:
- id: aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:490ff7b484d67db4a77e8d4bba9f12da68f6a3cae8da3b977522b60c8b5092c9
- registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.3
size: "53853013"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:00e053577693e0ee5f7f8b433cdb249624af188622d0da5df20eef4e25a0881c
- public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55157106"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: 5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954
- registry.k8s.io/kube-controller-manager@sha256:90ceecee64b3dac0e619928b9b21522bde1a120bb039971110ab68d830c1f1a2
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.3
size: "76004183"
- id: 36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691
repoDigests:
- registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6
- registry.k8s.io/kube-proxy@sha256:aee44d152c9eaa4f3e10584e61ee501a094880168db257af1201c806982a0945
repoTags:
- registry.k8s.io/kube-proxy:v1.34.3
size: "73145241"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460
- registry.k8s.io/kube-apiserver@sha256:9b2e9bae4dc94991e51c601ba6a00369b45064243ba7822143b286edb9d41f9e
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.3
size: "89050097"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-199791
size: "4943877"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: bd14014b4b75adc182145d1e300fd4204bd8e1109d5539ab5e74fe1e4332ddc4
repoDigests:
- localhost/minikube-local-cache-test@sha256:a739a2434ea3709446b058f592d2f837178ab57b936fb72678bca2969f929cc0
repoTags:
- localhost/minikube-local-cache-test:functional-199791
size: "3330"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-199791 image ls --format yaml --alsologtostderr:
I1219 02:40:50.083160   16463 out.go:360] Setting OutFile to fd 1 ...
I1219 02:40:50.083388   16463 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:40:50.083397   16463 out.go:374] Setting ErrFile to fd 2...
I1219 02:40:50.083401   16463 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:40:50.083585   16463 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5010/.minikube/bin
I1219 02:40:50.084058   16463 config.go:182] Loaded profile config "functional-199791": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1219 02:40:50.084139   16463 config.go:182] Loaded profile config "functional-199791": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1219 02:40:50.086228   16463 ssh_runner.go:195] Run: systemctl --version
I1219 02:40:50.088362   16463 main.go:144] libmachine: domain functional-199791 has defined MAC address 52:54:00:47:b9:97 in network mk-functional-199791
I1219 02:40:50.088752   16463 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:47:b9:97", ip: ""} in network mk-functional-199791: {Iface:virbr1 ExpiryTime:2025-12-19 03:33:22 +0000 UTC Type:0 Mac:52:54:00:47:b9:97 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:functional-199791 Clientid:01:52:54:00:47:b9:97}
I1219 02:40:50.088776   16463 main.go:144] libmachine: domain functional-199791 has defined IP address 192.168.39.97 and MAC address 52:54:00:47:b9:97 in network mk-functional-199791
I1219 02:40:50.088928   16463 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/functional-199791/id_rsa Username:docker}
I1219 02:40:50.170888   16463 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.17s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-199791 ssh pgrep buildkitd: exit status 1 (145.245172ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 image build -t localhost/my-image:functional-199791 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-199791 image build -t localhost/my-image:functional-199791 testdata/build --alsologtostderr: (3.093601272s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-199791 image build -t localhost/my-image:functional-199791 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 490ffcb314f
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-199791
--> 300f679724b
Successfully tagged localhost/my-image:functional-199791
300f679724b607254be829b88ffa5f0cfce1822dfc89e1e490783e1db4525a58
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-199791 image build -t localhost/my-image:functional-199791 testdata/build --alsologtostderr:
I1219 02:40:50.405612   16485 out.go:360] Setting OutFile to fd 1 ...
I1219 02:40:50.405739   16485 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:40:50.405748   16485 out.go:374] Setting ErrFile to fd 2...
I1219 02:40:50.405751   16485 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:40:50.405947   16485 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5010/.minikube/bin
I1219 02:40:50.406448   16485 config.go:182] Loaded profile config "functional-199791": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1219 02:40:50.406961   16485 config.go:182] Loaded profile config "functional-199791": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1219 02:40:50.408902   16485 ssh_runner.go:195] Run: systemctl --version
I1219 02:40:50.411127   16485 main.go:144] libmachine: domain functional-199791 has defined MAC address 52:54:00:47:b9:97 in network mk-functional-199791
I1219 02:40:50.411534   16485 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:47:b9:97", ip: ""} in network mk-functional-199791: {Iface:virbr1 ExpiryTime:2025-12-19 03:33:22 +0000 UTC Type:0 Mac:52:54:00:47:b9:97 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:functional-199791 Clientid:01:52:54:00:47:b9:97}
I1219 02:40:50.411562   16485 main.go:144] libmachine: domain functional-199791 has defined IP address 192.168.39.97 and MAC address 52:54:00:47:b9:97 in network mk-functional-199791
I1219 02:40:50.411748   16485 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/functional-199791/id_rsa Username:docker}
I1219 02:40:50.490681   16485 build_images.go:162] Building image from path: /tmp/build.1452325768.tar
I1219 02:40:50.490733   16485 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1219 02:40:50.502608   16485 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1452325768.tar
I1219 02:40:50.508674   16485 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1452325768.tar: stat -c "%s %y" /var/lib/minikube/build/build.1452325768.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1452325768.tar': No such file or directory
I1219 02:40:50.508698   16485 ssh_runner.go:362] scp /tmp/build.1452325768.tar --> /var/lib/minikube/build/build.1452325768.tar (3072 bytes)
I1219 02:40:50.538903   16485 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1452325768
I1219 02:40:50.550213   16485 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1452325768 -xf /var/lib/minikube/build/build.1452325768.tar
I1219 02:40:50.560866   16485 crio.go:315] Building image: /var/lib/minikube/build/build.1452325768
I1219 02:40:50.560926   16485 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-199791 /var/lib/minikube/build/build.1452325768 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1219 02:40:53.413173   16485 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-199791 /var/lib/minikube/build/build.1452325768 --cgroup-manager=cgroupfs: (2.852214539s)
I1219 02:40:53.413244   16485 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1452325768
I1219 02:40:53.427854   16485 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1452325768.tar
I1219 02:40:53.439261   16485 build_images.go:218] Built localhost/my-image:functional-199791 from /tmp/build.1452325768.tar
I1219 02:40:53.439300   16485 build_images.go:134] succeeded building to: functional-199791
I1219 02:40:53.439310   16485 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.42s)
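The stderr above spells out how image build works on the crio runtime: the local context is packed into /tmp/build.<id>.tar, copied over SSH into /var/lib/minikube/build, unpacked, and handed to podman with --cgroup-manager=cgroupfs. A rough manual equivalent against the same profile; the "manual" directory and tag are illustrative, and the SSH identity, user and IP are the ones logged above:

    # pack the context, copy it in over the profile's SSH identity, unpack, build
    tar -cf /tmp/manual-build.tar -C testdata/build .
    scp -i /home/jenkins/minikube-integration/22230-5010/.minikube/machines/functional-199791/id_rsa /tmp/manual-build.tar docker@192.168.39.97:/tmp/
    out/minikube-linux-amd64 -p functional-199791 ssh -- "sudo mkdir -p /var/lib/minikube/build/manual && sudo tar -C /var/lib/minikube/build/manual -xf /tmp/manual-build.tar"
    out/minikube-linux-amd64 -p functional-199791 ssh -- sudo podman build -t localhost/my-image:manual /var/lib/minikube/build/manual --cgroup-manager=cgroupfs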

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.748089791s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-199791
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.77s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 image load --daemon kicbase/echo-server:functional-199791 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-199791 image load --daemon kicbase/echo-server:functional-199791 --alsologtostderr: (1.053679627s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 image load --daemon kicbase/echo-server:functional-199791 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.80s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-199791
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 image load --daemon kicbase/echo-server:functional-199791 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.64s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 image save kicbase/echo-server:functional-199791 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.48s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 image rm kicbase/echo-server:functional-199791 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.73s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-199791
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 image save --daemon kicbase/echo-server:functional-199791 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-199791
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.52s)
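Taken together, the ImageCommands subtests above round-trip an image between the host docker daemon, a tar archive, and the crio store in the VM. A condensed sketch of that cycle using the same commands (the /tmp path is illustrative; the test writes its archive into the Jenkins workspace instead):

    docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-199791
    out/minikube-linux-amd64 -p functional-199791 image load --daemon kicbase/echo-server:functional-199791     # docker daemon -> crio
    out/minikube-linux-amd64 -p functional-199791 image save kicbase/echo-server:functional-199791 /tmp/echo-server-save.tar    # crio -> tar
    out/minikube-linux-amd64 -p functional-199791 image rm kicbase/echo-server:functional-199791
    out/minikube-linux-amd64 -p functional-199791 image load /tmp/echo-server-save.tar                          # tar -> crio
    out/minikube-linux-amd64 -p functional-199791 image save --daemon kicbase/echo-server:functional-199791     # crio -> docker daemon
    out/minikube-linux-amd64 -p functional-199791 image ls                                                      # verify after each step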

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 update-context --alsologtostderr -v=2
E1219 02:42:49.625733    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (2.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-199791 service list: (2.385339463s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (2.39s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (2.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-199791 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-199791 service list -o json: (2.384106862s)
functional_test.go:1504: Took "2.384185244s" to run "out/minikube-linux-amd64 -p functional-199791 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (2.38s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-199791
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-199791
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-199791
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22230-5010/.minikube/files/etc/test/nested/copy/8937/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy (80.59s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-936345 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
E1219 02:47:49.629451    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-936345 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (1m20.592427268s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy (80.59s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart (31.17s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart
I1219 02:48:00.774021    8937 config.go:182] Loaded profile config "functional-936345": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-936345 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-936345 --alsologtostderr -v=8: (31.165318228s)
functional_test.go:678: soft start took 31.165637374s for "functional-936345" cluster.
I1219 02:48:31.939651    8937 config.go:182] Loaded profile config "functional-936345": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart (31.17s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-936345 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods (0.07s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote (3.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-936345 cache add registry.k8s.io/pause:3.1: (1.071689336s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-936345 cache add registry.k8s.io/pause:3.3: (1.118318789s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-936345 cache add registry.k8s.io/pause:latest: (1.070297506s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote (3.26s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local (2.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-936345 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialCacheC3818296843/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 cache add minikube-local-cache-test:functional-936345
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-936345 cache add minikube-local-cache-test:functional-936345: (1.731928407s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 cache delete minikube-local-cache-test:functional-936345
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-936345
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local (2.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node (0.17s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node (0.17s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload (1.39s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-936345 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (167.167284ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload (1.39s)
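cache_reload shows the recovery path when an image is removed from the node out-of-band: cache reload pushes everything in the local cache back onto the node. The same flow by hand, using the commands from the log:

    out/minikube-linux-amd64 -p functional-936345 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-936345 ssh sudo crictl inspecti registry.k8s.io/pause:latest    # exit 1: image no longer present
    out/minikube-linux-amd64 -p functional-936345 cache reload
    out/minikube-linux-amd64 -p functional-936345 ssh sudo crictl inspecti registry.k8s.io/pause:latest    # succeeds again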

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 kubectl -- --context functional-936345 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-936345 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig (32.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-936345 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-936345 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.066576164s)
functional_test.go:776: restart took 32.066734499s for "functional-936345" cluster.
I1219 02:49:11.408894    8937 config.go:182] Loaded profile config "functional-936345": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig (32.07s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-936345 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth (0.06s)
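ComponentHealth reads the control-plane pods (tier=control-plane in kube-system) and checks each pod's phase and Ready condition, as logged above. An equivalent manual check; the jsonpath expression is illustrative, not what the test runs:

    kubectl --context functional-936345 get po -l tier=control-plane -n kube-system \
      -o jsonpath='{range .items[*]}{.metadata.labels.component}{"\t"}{.status.phase}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'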

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd (1.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 logs
E1219 02:49:12.679499    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-936345 logs: (1.224300089s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd (1.22s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd (1.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialLogsFi1579073133/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-936345 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialLogsFi1579073133/001/logs.txt: (1.201011932s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd (1.20s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService (4.38s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-936345 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-936345
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-936345: exit status 115 (225.602842ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL             │
	├───────────┼─────────────┼─────────────┼────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.80:32584 │
	└───────────┴─────────────┴─────────────┴────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-936345 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService (4.38s)
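The SVC_UNREACHABLE exit above (status 115) is minikube declining to print a reachable URL because the service has no running backing pods. A quick way to confirm that state with the same context before deleting the manifest:

    kubectl --context functional-936345 get svc invalid-svc
    kubectl --context functional-936345 get endpoints invalid-svc    # an empty ENDPOINTS column means no running pods back the service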

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd (0.41s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-936345 config get cpus: exit status 14 (71.052078ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-936345 config get cpus: exit status 14 (56.180113ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd (0.41s)
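The ConfigCmd run also documents the exit-code contract for per-profile config: config get on an unset key exits 14 with "Error: specified key could not be found in config". The same set/get/unset cycle, condensed from the commands above:

    out/minikube-linux-amd64 -p functional-936345 config set cpus 2
    out/minikube-linux-amd64 -p functional-936345 config get cpus      # prints the stored value, exit 0
    out/minikube-linux-amd64 -p functional-936345 config unset cpus
    out/minikube-linux-amd64 -p functional-936345 config get cpus      # exit 14: specified key could not be found in config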

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun (0.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-936345 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-936345 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: exit status 23 (109.546579ms)

                                                
                                                
-- stdout --
	* [functional-936345] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22230
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22230-5010/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5010/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1219 02:49:20.505015   19625 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:49:20.505243   19625 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:49:20.505251   19625 out.go:374] Setting ErrFile to fd 2...
	I1219 02:49:20.505255   19625 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:49:20.505429   19625 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5010/.minikube/bin
	I1219 02:49:20.505819   19625 out.go:368] Setting JSON to false
	I1219 02:49:20.506680   19625 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1904,"bootTime":1766110656,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 02:49:20.506725   19625 start.go:143] virtualization: kvm guest
	I1219 02:49:20.508218   19625 out.go:179] * [functional-936345] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 02:49:20.509268   19625 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 02:49:20.509262   19625 notify.go:221] Checking for updates...
	I1219 02:49:20.511118   19625 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 02:49:20.512258   19625 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-5010/kubeconfig
	I1219 02:49:20.513248   19625 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5010/.minikube
	I1219 02:49:20.514229   19625 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 02:49:20.515084   19625 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 02:49:20.516371   19625 config.go:182] Loaded profile config "functional-936345": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1219 02:49:20.516944   19625 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 02:49:20.549689   19625 out.go:179] * Using the kvm2 driver based on existing profile
	I1219 02:49:20.550735   19625 start.go:309] selected driver: kvm2
	I1219 02:49:20.550747   19625 start.go:928] validating driver "kvm2" against &{Name:functional-936345 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-rc.1 ClusterName:functional-936345 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 02:49:20.550852   19625 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 02:49:20.553077   19625 out.go:203] 
	W1219 02:49:20.554089   19625 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1219 02:49:20.555043   19625 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-936345 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun (0.22s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-936345 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-936345 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: exit status 23 (114.137949ms)

                                                
                                                
-- stdout --
	* [functional-936345] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22230
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22230-5010/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5010/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1219 02:49:20.389513   19598 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:49:20.389629   19598 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:49:20.389640   19598 out.go:374] Setting ErrFile to fd 2...
	I1219 02:49:20.389648   19598 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:49:20.390056   19598 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5010/.minikube/bin
	I1219 02:49:20.390564   19598 out.go:368] Setting JSON to false
	I1219 02:49:20.391668   19598 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1904,"bootTime":1766110656,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 02:49:20.391736   19598 start.go:143] virtualization: kvm guest
	I1219 02:49:20.393844   19598 out.go:179] * [functional-936345] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1219 02:49:20.398661   19598 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 02:49:20.398663   19598 notify.go:221] Checking for updates...
	I1219 02:49:20.400498   19598 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 02:49:20.401493   19598 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-5010/kubeconfig
	I1219 02:49:20.402487   19598 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5010/.minikube
	I1219 02:49:20.403380   19598 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 02:49:20.404383   19598 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 02:49:20.405735   19598 config.go:182] Loaded profile config "functional-936345": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1219 02:49:20.406142   19598 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 02:49:20.437090   19598 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1219 02:49:20.437982   19598 start.go:309] selected driver: kvm2
	I1219 02:49:20.437996   19598 start.go:928] validating driver "kvm2" against &{Name:functional-936345 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-rc.1 ClusterName:functional-936345 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 02:49:20.438121   19598 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 02:49:20.443883   19598 out.go:203] 
	W1219 02:49:20.444769   19598 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1219 02:49:20.445610   19598 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage (0.11s)
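The non-zero exit above is intentional: 250MB is below minikube's usable minimum, so the dry-run exits with status 23 and an RSRC_INSUFFICIENT_REQ_MEMORY reason, localized to French in this run. A minimal Go sketch of the same check, reusing the binary path, profile name, and flags from this run (all assumptions outside this CI job) and asserting only the exit code and the locale-independent reason code:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Same dry-run invocation as the test; binary path and profile name
	// are taken from this run and will differ elsewhere.
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "functional-936345", "--dry-run", "--memory", "250MB",
		"--alsologtostderr", "--driver=kvm2", "--container-runtime=crio",
		"--kubernetes-version=v1.35.0-rc.1")
	out, err := cmd.CombinedOutput()
	if err == nil {
		fmt.Println("unexpected success: 250MB should be rejected")
		os.Exit(1)
	}
	// The reason code stays the same even when the message is localized.
	if ee, ok := err.(*exec.ExitError); ok {
		fmt.Println("exit code:", ee.ExitCode())
	}
	fmt.Println("reason code present:",
		strings.Contains(string(out), "RSRC_INSUFFICIENT_REQ_MEMORY"))
}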

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd (0.64s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd (0.64s)
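For scripting against the same profile, the JSON form of the status command is easier to consume than the Go template. A minimal sketch, assuming a running single-node profile whose JSON keys mirror the template fields exercised above (Host, Kubelet, APIServer, Kubeconfig); the key names are an assumption, since the log does not show the JSON body:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Assumes the profile is up; status returns non-zero when components
	// are stopped, which would trip the error check below.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-936345",
		"status", "-o", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var st map[string]any
	if err := json.Unmarshal(out, &st); err != nil {
		log.Fatal(err)
	}
	for _, k := range []string{"Host", "Kubelet", "APIServer", "Kubeconfig"} {
		fmt.Printf("%s=%v\n", k, st[k])
	}
}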

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim (239.84s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [cd92a8c6-e659-4184-bcbd-43da477075c7] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.002712483s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-936345 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-936345 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-936345 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-936345 apply -f testdata/storage-provisioner/pod.yaml
I1219 02:54:28.930738    8937 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [4331f30c-327e-4bfc-8aa8-17c654da5066] Pending
helpers_test.go:353: "sp-pod" [4331f30c-327e-4bfc-8aa8-17c654da5066] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [4331f30c-327e-4bfc-8aa8-17c654da5066] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 3m47.003267596s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-936345 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-936345 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-936345 delete -f testdata/storage-provisioner/pod.yaml: (1.126945238s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-936345 apply -f testdata/storage-provisioner/pod.yaml
I1219 02:58:17.306416    8937 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [7e6b459d-dc6a-4646-a8d0-bca11659050f] Pending
helpers_test.go:353: "sp-pod" [7e6b459d-dc6a-4646-a8d0-bca11659050f] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.005623187s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-936345 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim (239.84s)
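The manifests this test applies live in testdata/storage-provisioner and are not reproduced in the log. The following is a hypothetical equivalent only, showing the shape of a claim named myclaim plus an sp-pod that mounts it at /tmp/mount (the names, label, and mount path come from the run above; the image and storage size are illustrative assumptions):

package main

import (
	"log"
	"os/exec"
	"strings"
)

// Illustrative manifests, not the repo's testdata: a claim on the default
// storage class and a pod labelled test=storage-provisioner that mounts it
// at /tmp/mount, the path the test writes to and re-reads after re-creation.
const manifests = `
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: sp-pod
  labels:
    test: storage-provisioner
spec:
  containers:
  - name: myfrontend
    image: nginx
    volumeMounts:
    - mountPath: /tmp/mount
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim
`

func main() {
	cmd := exec.Command("kubectl", "--context", "functional-936345", "apply", "-f", "-")
	cmd.Stdin = strings.NewReader(manifests)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("%v: %s", err, out)
	}
	log.Println("applied claim and pod; data under /tmp/mount should survive pod re-creation")
}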

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd (0.29s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd (0.29s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd (0.98s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 ssh -n functional-936345 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 cp functional-936345:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelCpCm2796251211/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 ssh -n functional-936345 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 ssh -n functional-936345 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd (0.98s)
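A minimal Go sketch of the same round trip: copy a local file into the guest with cp, read it back over ssh, and compare byte-for-byte (paths and profile name taken from the run above):

package main

import (
	"bytes"
	"fmt"
	"log"
	"os"
	"os/exec"
)

func mk(args ...string) []byte {
	// Prefix every call with the profile flag used throughout this run.
	out, err := exec.Command("out/minikube-linux-amd64",
		append([]string{"-p", "functional-936345"}, args...)...).Output()
	if err != nil {
		log.Fatalf("minikube %v: %v", args, err)
	}
	return out
}

func main() {
	local, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		log.Fatal(err)
	}
	mk("cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt")
	remote := mk("ssh", "-n", "functional-936345", "sudo cat /home/docker/cp-test.txt")
	fmt.Println("round-trip matches:", bytes.Equal(local, remote))
}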

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL (287.81s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-936345 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-7d7b65bc95-c6dqh" [367404f1-003b-464d-8fb8-d9c1dba5d64c] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
E1219 02:50:45.208337    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-199791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:50:45.213642    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-199791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:50:45.223964    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-199791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:50:45.244208    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-199791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:50:45.284437    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-199791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:50:45.365163    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-199791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:50:45.525489    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-199791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:50:45.845947    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-199791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:50:46.486447    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-199791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:50:47.767503    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-199791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:50:50.328232    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-199791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:50:55.449375    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-199791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:51:05.690048    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-199791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:51:26.170899    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-199791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:52:07.132087    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-199791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:52:49.625653    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:53:29.052949    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-199791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "mysql-7d7b65bc95-c6dqh" [367404f1-003b-464d-8fb8-d9c1dba5d64c] Running
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL: app=mysql healthy within 4m42.005967841s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-936345 exec mysql-7d7b65bc95-c6dqh -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-936345 exec mysql-7d7b65bc95-c6dqh -- mysql -ppassword -e "show databases;": exit status 1 (192.238454ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1219 02:55:21.830237    8937 retry.go:31] will retry after 1.196521543s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-936345 exec mysql-7d7b65bc95-c6dqh -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-936345 exec mysql-7d7b65bc95-c6dqh -- mysql -ppassword -e "show databases;": exit status 1 (162.024533ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1219 02:55:23.189523    8937 retry.go:31] will retry after 1.006825197s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-936345 exec mysql-7d7b65bc95-c6dqh -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-936345 exec mysql-7d7b65bc95-c6dqh -- mysql -ppassword -e "show databases;": exit status 1 (134.005978ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1219 02:55:24.331441    8937 retry.go:31] will retry after 2.822863062s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-936345 exec mysql-7d7b65bc95-c6dqh -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL (287.81s)
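The retries above happen because the pod reports Running before mysqld has finished initialising, so the first exec attempts are rejected with "Access denied". A minimal sketch of the same backoff loop around the exec command (the pod name is the one from this run and changes on every deployment):

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	args := []string{"--context", "functional-936345", "exec", "mysql-7d7b65bc95-c6dqh", "--",
		"mysql", "-ppassword", "-e", "show databases;"}
	delay := time.Second
	for attempt := 1; attempt <= 5; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			log.Printf("succeeded on attempt %d:\n%s", attempt, out)
			return
		}
		// mysqld is still coming up inside the Running pod; back off and retry.
		log.Printf("attempt %d failed (%v), retrying after %s", attempt, err, delay)
		time.Sleep(delay)
		delay *= 2
	}
	log.Fatal("mysql never became reachable")
}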

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync (0.16s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/8937/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 ssh "sudo cat /etc/test/nested/copy/8937/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync (0.16s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync (1.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/8937.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 ssh "sudo cat /etc/ssl/certs/8937.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/8937.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 ssh "sudo cat /usr/share/ca-certificates/8937.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/89372.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 ssh "sudo cat /etc/ssl/certs/89372.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/89372.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 ssh "sudo cat /usr/share/ca-certificates/89372.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync (1.20s)
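The three locations probed per certificate should all expose the same bytes; 8937 is the test process PID and 51391683.0 is the OpenSSL subject-hash link name from this run, so both differ between runs. A minimal sketch that reads each path over ssh and compares them:

package main

import (
	"bytes"
	"fmt"
	"log"
	"os/exec"
)

func catInVM(path string) []byte {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-936345",
		"ssh", "sudo cat "+path).Output()
	if err != nil {
		log.Fatalf("%s: %v", path, err)
	}
	return out
}

func main() {
	paths := []string{
		"/etc/ssl/certs/8937.pem",
		"/usr/share/ca-certificates/8937.pem",
		"/etc/ssl/certs/51391683.0",
	}
	first := catInVM(paths[0])
	for _, p := range paths[1:] {
		fmt.Printf("%s matches %s: %v\n", p, paths[0], bytes.Equal(first, catInVM(p)))
	}
}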

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-936345 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels (0.06s)
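A host-side equivalent of the go-template above, decoding kubectl's JSON output and printing the first node's label keys:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-936345",
		"get", "nodes", "-o", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	// Standard List shape: .items[0].metadata.labels is a string map.
	var nodes struct {
		Items []struct {
			Metadata struct {
				Labels map[string]string `json:"labels"`
			} `json:"metadata"`
		} `json:"items"`
	}
	if err := json.Unmarshal(out, &nodes); err != nil {
		log.Fatal(err)
	}
	for k := range nodes.Items[0].Metadata.Labels {
		fmt.Println(k)
	}
}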

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled (0.32s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-936345 ssh "sudo systemctl is-active docker": exit status 1 (155.303381ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-936345 ssh "sudo systemctl is-active containerd": exit status 1 (159.74999ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled (0.32s)
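The non-zero exits above are the expected result, not failures: with crio as the active runtime, `systemctl is-active docker` prints "inactive" and exits with status 3 in the guest, which minikube ssh relays as a non-zero exit. A minimal sketch that treats that combination as success:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		// Output() captures only stdout, so the state string is not mixed
		// with the "ssh: Process exited with status N" message on stderr.
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-936345",
			"ssh", "sudo systemctl is-active "+unit).Output()
		state := strings.TrimSpace(string(out))
		switch {
		case err != nil && state == "inactive":
			fmt.Printf("%s: inactive, as expected on a crio cluster\n", unit)
		case err != nil:
			fmt.Printf("%s: probe failed with state %q: %v\n", unit, state, err)
		default:
			fmt.Printf("%s: unexpectedly active\n", unit)
		}
	}
}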

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License (0.3s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License (0.30s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create (0.4s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create (0.40s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list (0.31s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "251.303056ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "61.918434ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list (0.31s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output (0.3s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "248.849349ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "55.655241ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output (0.30s)
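The log only times these calls, so the JSON schema of `profile list -o json` is not shown here. A schema-agnostic sketch that assumes nothing beyond the top level being a JSON object, decoding it into raw messages and reporting what is present:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-o", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	// RawMessage keeps each top-level value opaque, so no field names are assumed.
	var doc map[string]json.RawMessage
	if err := json.Unmarshal(out, &doc); err != nil {
		log.Fatal(err)
	}
	for key, raw := range doc {
		fmt.Printf("%s: %d bytes of JSON\n", key, len(raw))
	}
}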

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port (69.86s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port
functional_test_mount_test.go:74: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-936345 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2228672934/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:108: wrote "test-1766112559480270103" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2228672934/001/created-by-test
functional_test_mount_test.go:108: wrote "test-1766112559480270103" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2228672934/001/created-by-test-removed-by-pod
functional_test_mount_test.go:108: wrote "test-1766112559480270103" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2228672934/001/test-1766112559480270103
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:116: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-936345 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (173.334826ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1219 02:49:19.653898    8937 retry.go:31] will retry after 409.279318ms: exit status 1
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 ssh -- ls -la /mount-9p
functional_test_mount_test.go:134: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 19 02:49 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 19 02:49 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 19 02:49 test-1766112559480270103
functional_test_mount_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 ssh cat /mount-9p/test-1766112559480270103
functional_test_mount_test.go:149: (dbg) Run:  kubectl --context functional-936345 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:154: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [cbe8a0dd-5db0-4c8a-a076-c2b0c28c1f82] Pending
helpers_test.go:353: "busybox-mount" [cbe8a0dd-5db0-4c8a-a076-c2b0c28c1f82] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [cbe8a0dd-5db0-4c8a-a076-c2b0c28c1f82] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [cbe8a0dd-5db0-4c8a-a076-c2b0c28c1f82] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:154: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 1m8.003099071s
functional_test_mount_test.go:170: (dbg) Run:  kubectl --context functional-936345 logs busybox-mount
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:95: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-936345 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2228672934/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port (69.86s)
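The first findmnt probe fails because the mount helper has not finished wiring up the 9p share, hence the retry before the directory listing succeeds. A minimal sketch of the same start-then-poll pattern (/tmp/mount-src is a hypothetical host directory standing in for the temp dir used above):

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	// Run the mount helper in the background, the way the test runs it as
	// a daemon, then poll the guest until the 9p mount becomes visible.
	mount := exec.Command("out/minikube-linux-amd64", "mount",
		"-p", "functional-936345", "/tmp/mount-src:/mount-9p", "--alsologtostderr", "-v=1")
	if err := mount.Start(); err != nil {
		log.Fatal(err)
	}
	defer mount.Process.Kill()

	for i := 0; i < 20; i++ {
		probe := exec.Command("out/minikube-linux-amd64", "-p", "functional-936345",
			"ssh", "findmnt -T /mount-9p | grep 9p")
		if probe.Run() == nil {
			log.Println("9p mount visible at /mount-9p")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Println("mount never appeared")
}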

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port (1.16s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port
functional_test_mount_test.go:219: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-936345 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3049443289/001:/mount-9p --alsologtostderr -v=1 --port 42089]
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:249: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-936345 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (160.756474ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1219 02:50:29.499131    8937 retry.go:31] will retry after 333.180793ms: exit status 1
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:263: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 ssh -- ls -la /mount-9p
functional_test_mount_test.go:267: guest mount directory contents
total 0
functional_test_mount_test.go:269: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-936345 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3049443289/001:/mount-9p --alsologtostderr -v=1 --port 42089] ...
functional_test_mount_test.go:270: reading mount text
functional_test_mount_test.go:284: done reading mount text
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-936345 ssh "sudo umount -f /mount-9p": exit status 1 (151.294413ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:238: "out/minikube-linux-amd64 -p functional-936345 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:240: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-936345 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3049443289/001:/mount-9p --alsologtostderr -v=1 --port 42089] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port (1.16s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup (1.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-936345 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2523260388/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-936345 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2523260388/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-936345 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2523260388/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-936345 ssh "findmnt -T" /mount1: exit status 1 (169.268288ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1219 02:50:30.669643    8937 retry.go:31] will retry after 405.836079ms: exit status 1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 ssh "findmnt -T" /mount2
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 ssh "findmnt -T" /mount3
functional_test_mount_test.go:376: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-936345 --kill=true
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-936345 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2523260388/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-936345 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2523260388/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-936345 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2523260388/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup (1.08s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort (0.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-936345 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-rc.1
registry.k8s.io/kube-proxy:v1.35.0-rc.1
registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
registry.k8s.io/kube-apiserver:v1.35.0-rc.1
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/docker/library/mysql:8.4
localhost/minikube-local-cache-test:functional-936345
localhost/kicbase/echo-server:functional-936345
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-936345 image ls --format short --alsologtostderr:
I1219 02:55:27.546982   21575 out.go:360] Setting OutFile to fd 1 ...
I1219 02:55:27.547064   21575 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:55:27.547071   21575 out.go:374] Setting ErrFile to fd 2...
I1219 02:55:27.547075   21575 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:55:27.547255   21575 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5010/.minikube/bin
I1219 02:55:27.547825   21575 config.go:182] Loaded profile config "functional-936345": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1219 02:55:27.547921   21575 config.go:182] Loaded profile config "functional-936345": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1219 02:55:27.549822   21575 ssh_runner.go:195] Run: systemctl --version
I1219 02:55:27.551970   21575 main.go:144] libmachine: domain functional-936345 has defined MAC address 52:54:00:63:7d:56 in network mk-functional-936345
I1219 02:55:27.552305   21575 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:63:7d:56", ip: ""} in network mk-functional-936345: {Iface:virbr1 ExpiryTime:2025-12-19 03:46:54 +0000 UTC Type:0 Mac:52:54:00:63:7d:56 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:functional-936345 Clientid:01:52:54:00:63:7d:56}
I1219 02:55:27.552324   21575 main.go:144] libmachine: domain functional-936345 has defined IP address 192.168.39.80 and MAC address 52:54:00:63:7d:56 in network mk-functional-936345
I1219 02:55:27.552443   21575 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/functional-936345/id_rsa Username:docker}
I1219 02:55:27.636506   21575 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort (0.18s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable (0.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-936345 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ localhost/minikube-local-cache-test     │ functional-936345  │ bd14014b4b75a │ 3.33kB │
│ localhost/my-image                      │ functional-936345  │ 049141a6df0b8 │ 1.47MB │
│ registry.k8s.io/coredns/coredns         │ v1.13.1            │ aa5e3ebc0dfed │ 79.2MB │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-rc.1       │ 73f80cdc073da │ 52.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/etcd                    │ 3.6.6-0            │ 0a108f7189562 │ 63.6MB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-rc.1       │ 58865405a13bc │ 90.8MB │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-rc.1       │ 5032a56602e1b │ 76.9MB │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/kube-proxy              │ v1.35.0-rc.1       │ af0321f3a4f38 │ 72MB   │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ localhost/kicbase/echo-server           │ functional-936345  │ 9056ab77afb8e │ 4.94MB │
│ public.ecr.aws/docker/library/mysql     │ 8.4                │ 20d0be4ee4524 │ 804MB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-936345 image ls --format table --alsologtostderr:
I1219 02:55:31.652534   21656 out.go:360] Setting OutFile to fd 1 ...
I1219 02:55:31.652691   21656 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:55:31.652701   21656 out.go:374] Setting ErrFile to fd 2...
I1219 02:55:31.652707   21656 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:55:31.652884   21656 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5010/.minikube/bin
I1219 02:55:31.653413   21656 config.go:182] Loaded profile config "functional-936345": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1219 02:55:31.653532   21656 config.go:182] Loaded profile config "functional-936345": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1219 02:55:31.655509   21656 ssh_runner.go:195] Run: systemctl --version
I1219 02:55:31.657585   21656 main.go:144] libmachine: domain functional-936345 has defined MAC address 52:54:00:63:7d:56 in network mk-functional-936345
I1219 02:55:31.657948   21656 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:63:7d:56", ip: ""} in network mk-functional-936345: {Iface:virbr1 ExpiryTime:2025-12-19 03:46:54 +0000 UTC Type:0 Mac:52:54:00:63:7d:56 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:functional-936345 Clientid:01:52:54:00:63:7d:56}
I1219 02:55:31.657976   21656 main.go:144] libmachine: domain functional-936345 has defined IP address 192.168.39.80 and MAC address 52:54:00:63:7d:56 in network mk-functional-936345
I1219 02:55:31.658131   21656 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/functional-936345/id_rsa Username:docker}
I1219 02:55:31.741923   21656 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable (0.18s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson (0.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-936345 image ls --format json --alsologtostderr:
[{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["
gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7","registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"79193994"},{"id":"5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98
","registry.k8s.io/kube-controller-manager@sha256:94b94fef358192d13794f5acd21909a3eb0b3e960ed4286ef37a437e7f9272cd"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"],"size":"76893010"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"ff04dc2f6088539a811991c6c0b697b3c22d499834b0fcb1cd6446b15639abc1","repoDigests":["docker.io/library/33bceff56b3f42b99efcc20fd051c60dbce41331710084cdd6361f69bda7077a-tmp@sha256:fced6e649b22365f9694ad
d1ce9575b1bd22ff6723dbfec7eafd868d5d9dd683"],"repoTags":[],"size":"1466018"},{"id":"bd14014b4b75adc182145d1e300fd4204bd8e1109d5539ab5e74fe1e4332ddc4","repoDigests":["localhost/minikube-local-cache-test@sha256:a739a2434ea3709446b058f592d2f837178ab57b936fb72678bca2969f929cc0"],"repoTags":["localhost/minikube-local-cache-test:functional-936345"],"size":"3330"},{"id":"049141a6df0b81265127cf8de9a7c5385ffc090e71580d5c3add8f42e20b198c","repoDigests":["localhost/my-image@sha256:12e1171ea22fcea404cd693a7e698c84e4ca931ae959c34df65d608b4533bb2d"],"repoTags":["localhost/my-image:functional-936345"],"size":"1468599"},{"id":"20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438","repoDigests":["public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036","public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233"],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"803724943"},{"id":"0a108f7189562e99793bd
ecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2","repoDigests":["registry.k8s.io/etcd@sha256:5279f56db4f32772bb41e47ca44c553f5c87a08fdf339d74c23a4cdc3c388d6a","registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"63582405"},{"id":"73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc","repoDigests":["registry.k8s.io/kube-scheduler@sha256:1e2bf4dfee764cc2eb3300c543b3ce1b00ca3ffc46b93f2b7ef326fbc2385636","registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-rc.1"],"size":"52763474"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["l
ocalhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-936345"],"size":"4943877"},{"id":"58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce","repoDigests":["registry.k8s.io/kube-apiserver@sha256:4527daf97bed5f1caff2267f9b84a6c626b82615d9ff7f933619321aebde536f","registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-rc.1"],"size":"90844140"},{"id":"af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a","repoDigests":["registry.k8s.io/kube-proxy@sha256:0efaa6b2a17dbaaac351bb0f55c1a495d297d87ac86b16965ec52e835c2b48d9","registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-rc.1"],"size":"71986585"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.
k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-936345 image ls --format json --alsologtostderr:
I1219 02:55:31.468886   21645 out.go:360] Setting OutFile to fd 1 ...
I1219 02:55:31.469140   21645 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:55:31.469149   21645 out.go:374] Setting ErrFile to fd 2...
I1219 02:55:31.469153   21645 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:55:31.469320   21645 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5010/.minikube/bin
I1219 02:55:31.469827   21645 config.go:182] Loaded profile config "functional-936345": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1219 02:55:31.469917   21645 config.go:182] Loaded profile config "functional-936345": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1219 02:55:31.471802   21645 ssh_runner.go:195] Run: systemctl --version
I1219 02:55:31.473683   21645 main.go:144] libmachine: domain functional-936345 has defined MAC address 52:54:00:63:7d:56 in network mk-functional-936345
I1219 02:55:31.473964   21645 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:63:7d:56", ip: ""} in network mk-functional-936345: {Iface:virbr1 ExpiryTime:2025-12-19 03:46:54 +0000 UTC Type:0 Mac:52:54:00:63:7d:56 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:functional-936345 Clientid:01:52:54:00:63:7d:56}
I1219 02:55:31.473985   21645 main.go:144] libmachine: domain functional-936345 has defined IP address 192.168.39.80 and MAC address 52:54:00:63:7d:56 in network mk-functional-936345
I1219 02:55:31.474095   21645 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/functional-936345/id_rsa Username:docker}
I1219 02:55:31.559916   21645 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson (0.18s)
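Note: the JSON printed above is an array of objects with id, repoDigests, repoTags and size fields, so it can be filtered on the host after the run. A minimal sketch, assuming jq is available on the Jenkins host (jq is not part of the test itself):
  # print every tag known to the crio runtime, skipping untagged build layers
  out/minikube-linux-amd64 -p functional-936345 image ls --format json \
    | jq -r '.[] | select(.repoTags | length > 0) | .repoTags[]'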

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml (0.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-936345 image ls --format yaml --alsologtostderr:
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "79193994"
- id: af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0efaa6b2a17dbaaac351bb0f55c1a495d297d87ac86b16965ec52e835c2b48d9
- registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-rc.1
size: "71986585"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:4527daf97bed5f1caff2267f9b84a6c626b82615d9ff7f933619321aebde536f
- registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-rc.1
size: "90844140"
- id: 5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98
- registry.k8s.io/kube-controller-manager@sha256:94b94fef358192d13794f5acd21909a3eb0b3e960ed4286ef37a437e7f9272cd
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
size: "76893010"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-936345
size: "4943877"
- id: 0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2
repoDigests:
- registry.k8s.io/etcd@sha256:5279f56db4f32772bb41e47ca44c553f5c87a08fdf339d74c23a4cdc3c388d6a
- registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "63582405"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: bd14014b4b75adc182145d1e300fd4204bd8e1109d5539ab5e74fe1e4332ddc4
repoDigests:
- localhost/minikube-local-cache-test@sha256:a739a2434ea3709446b058f592d2f837178ab57b936fb72678bca2969f929cc0
repoTags:
- localhost/minikube-local-cache-test:functional-936345
size: "3330"
- id: 20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438
repoDigests:
- public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036
- public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "803724943"
- id: 73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:1e2bf4dfee764cc2eb3300c543b3ce1b00ca3ffc46b93f2b7ef326fbc2385636
- registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-rc.1
size: "52763474"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-936345 image ls --format yaml --alsologtostderr:
I1219 02:55:27.729233   21586 out.go:360] Setting OutFile to fd 1 ...
I1219 02:55:27.729473   21586 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:55:27.729483   21586 out.go:374] Setting ErrFile to fd 2...
I1219 02:55:27.729487   21586 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:55:27.729690   21586 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5010/.minikube/bin
I1219 02:55:27.730174   21586 config.go:182] Loaded profile config "functional-936345": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1219 02:55:27.730265   21586 config.go:182] Loaded profile config "functional-936345": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1219 02:55:27.732191   21586 ssh_runner.go:195] Run: systemctl --version
I1219 02:55:27.734143   21586 main.go:144] libmachine: domain functional-936345 has defined MAC address 52:54:00:63:7d:56 in network mk-functional-936345
I1219 02:55:27.734437   21586 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:63:7d:56", ip: ""} in network mk-functional-936345: {Iface:virbr1 ExpiryTime:2025-12-19 03:46:54 +0000 UTC Type:0 Mac:52:54:00:63:7d:56 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:functional-936345 Clientid:01:52:54:00:63:7d:56}
I1219 02:55:27.734454   21586 main.go:144] libmachine: domain functional-936345 has defined IP address 192.168.39.80 and MAC address 52:54:00:63:7d:56 in network mk-functional-936345
I1219 02:55:27.734582   21586 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/functional-936345/id_rsa Username:docker}
I1219 02:55:27.817995   21586 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml (0.19s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild (3.55s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-936345 ssh pgrep buildkitd: exit status 1 (149.814562ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 image build -t localhost/my-image:functional-936345 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-936345 image build -t localhost/my-image:functional-936345 testdata/build --alsologtostderr: (3.206175106s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-936345 image build -t localhost/my-image:functional-936345 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> ff04dc2f608
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-936345
--> 049141a6df0
Successfully tagged localhost/my-image:functional-936345
049141a6df0b81265127cf8de9a7c5385ffc090e71580d5c3add8f42e20b198c
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-936345 image build -t localhost/my-image:functional-936345 testdata/build --alsologtostderr:
I1219 02:55:28.067271   21608 out.go:360] Setting OutFile to fd 1 ...
I1219 02:55:28.067357   21608 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:55:28.067365   21608 out.go:374] Setting ErrFile to fd 2...
I1219 02:55:28.067369   21608 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:55:28.067555   21608 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5010/.minikube/bin
I1219 02:55:28.068053   21608 config.go:182] Loaded profile config "functional-936345": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1219 02:55:28.068729   21608 config.go:182] Loaded profile config "functional-936345": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1219 02:55:28.070484   21608 ssh_runner.go:195] Run: systemctl --version
I1219 02:55:28.072312   21608 main.go:144] libmachine: domain functional-936345 has defined MAC address 52:54:00:63:7d:56 in network mk-functional-936345
I1219 02:55:28.072663   21608 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:63:7d:56", ip: ""} in network mk-functional-936345: {Iface:virbr1 ExpiryTime:2025-12-19 03:46:54 +0000 UTC Type:0 Mac:52:54:00:63:7d:56 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:functional-936345 Clientid:01:52:54:00:63:7d:56}
I1219 02:55:28.072686   21608 main.go:144] libmachine: domain functional-936345 has defined IP address 192.168.39.80 and MAC address 52:54:00:63:7d:56 in network mk-functional-936345
I1219 02:55:28.072853   21608 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/functional-936345/id_rsa Username:docker}
I1219 02:55:28.160260   21608 build_images.go:162] Building image from path: /tmp/build.2976340649.tar
I1219 02:55:28.160332   21608 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1219 02:55:28.172125   21608 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2976340649.tar
I1219 02:55:28.176554   21608 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2976340649.tar: stat -c "%s %y" /var/lib/minikube/build/build.2976340649.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2976340649.tar': No such file or directory
I1219 02:55:28.176595   21608 ssh_runner.go:362] scp /tmp/build.2976340649.tar --> /var/lib/minikube/build/build.2976340649.tar (3072 bytes)
I1219 02:55:28.204746   21608 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2976340649
I1219 02:55:28.215276   21608 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2976340649 -xf /var/lib/minikube/build/build.2976340649.tar
I1219 02:55:28.225293   21608 crio.go:315] Building image: /var/lib/minikube/build/build.2976340649
I1219 02:55:28.225345   21608 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-936345 /var/lib/minikube/build/build.2976340649 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1219 02:55:31.187862   21608 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-936345 /var/lib/minikube/build/build.2976340649 --cgroup-manager=cgroupfs: (2.962493419s)
I1219 02:55:31.187931   21608 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2976340649
I1219 02:55:31.201889   21608 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2976340649.tar
I1219 02:55:31.212385   21608 build_images.go:218] Built localhost/my-image:functional-936345 from /tmp/build.2976340649.tar
I1219 02:55:31.212420   21608 build_images.go:134] succeeded building to: functional-936345
I1219 02:55:31.212426   21608 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild (3.55s)
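Note: the STEP lines in the build stdout imply a three-instruction build context. The actual testdata/build directory is not reproduced in this log, so the following is only an illustrative reconstruction of an equivalent context, driven with the same image build command:
  # hypothetical stand-in for testdata/build; the real contents may differ
  mkdir -p /tmp/build-example && cd /tmp/build-example
  echo hello > content.txt
  cat > Dockerfile <<'EOF'
  FROM gcr.io/k8s-minikube/busybox
  RUN true
  ADD content.txt /
  EOF
  out/minikube-linux-amd64 -p functional-936345 image build -t localhost/my-image:functional-936345 . --alsologtostderr
The stderr above also records how the build runs on the crio runtime: the context is tarred on the host (/tmp/build.2976340649.tar), copied into the VM under /var/lib/minikube/build, unpacked, and built with sudo podman build --cgroup-manager=cgroupfs.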

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup (0.83s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-936345
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup (0.83s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon (1.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 image load --daemon kicbase/echo-server:functional-936345 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-936345 image load --daemon kicbase/echo-server:functional-936345 --alsologtostderr: (1.060551963s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon (1.26s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon (0.81s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 image load --daemon kicbase/echo-server:functional-936345 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon (0.81s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon (1.62s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-936345
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 image load --daemon kicbase/echo-server:functional-936345 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon (1.62s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile (0.48s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 image save kicbase/echo-server:functional-936345 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile (0.48s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove (0.45s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 image rm kicbase/echo-server:functional-936345 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove (0.45s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile (0.75s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile (0.75s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon (0.57s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-936345
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 image save --daemon kicbase/echo-server:functional-936345 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-936345
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon (0.57s)
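Note: taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon exercise a full save/remove/restore round trip. A condensed replay using the commands and paths from this log:
  # save the tagged image to a tarball on the host
  out/minikube-linux-amd64 -p functional-936345 image save kicbase/echo-server:functional-936345 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
  # remove it from the cluster runtime, then restore it from the tarball
  out/minikube-linux-amd64 -p functional-936345 image rm kicbase/echo-server:functional-936345 --alsologtostderr
  out/minikube-linux-amd64 -p functional-936345 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
  # or export it back into the local docker daemon instead of a file
  out/minikube-linux-amd64 -p functional-936345 image save --daemon kicbase/echo-server:functional-936345 --alsologtostderr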

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short (0.06s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components (0.4s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components (0.40s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes (0.07s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 update-context --alsologtostderr -v=2
E1219 02:55:45.208256    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-199791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:56:12.893647    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-199791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:57:49.625564    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters (0.07s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List (2.41s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-936345 service list: (2.410774886s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List (2.41s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput (2.4s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-936345 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-936345 service list -o json: (2.397182674s)
functional_test.go:1504: Took "2.397289193s" to run "out/minikube-linux-amd64 -p functional-936345 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput (2.40s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-936345
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-936345
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-936345
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images (0.01s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (205.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1219 03:00:45.212855    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-199791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:02:49.626254    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-423720 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (3m25.000854438s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (205.54s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (6.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-423720 kubectl -- rollout status deployment/busybox: (4.474280996s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 kubectl -- exec busybox-7b57f96db7-5n9tl -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 kubectl -- exec busybox-7b57f96db7-qq97h -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 kubectl -- exec busybox-7b57f96db7-zcthh -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 kubectl -- exec busybox-7b57f96db7-5n9tl -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 kubectl -- exec busybox-7b57f96db7-qq97h -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 kubectl -- exec busybox-7b57f96db7-zcthh -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 kubectl -- exec busybox-7b57f96db7-5n9tl -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 kubectl -- exec busybox-7b57f96db7-qq97h -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 kubectl -- exec busybox-7b57f96db7-zcthh -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.71s)
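Note: the subtest applies testdata/ha/ha-pod-dns-test.yaml, waits for deployment/busybox to roll out, then runs the same nslookup probes in every busybox pod. A compact way to replay the final probe by hand, assuming (as in this run) that the default namespace contains only the busybox pods:
  # repeat the in-cluster DNS check against each pod of the busybox deployment
  out/minikube-linux-amd64 -p ha-423720 kubectl -- rollout status deployment/busybox
  for pod in $(out/minikube-linux-amd64 -p ha-423720 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'); do
    out/minikube-linux-amd64 -p ha-423720 kubectl -- exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
  done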

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 kubectl -- exec busybox-7b57f96db7-5n9tl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 kubectl -- exec busybox-7b57f96db7-5n9tl -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 kubectl -- exec busybox-7b57f96db7-qq97h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 kubectl -- exec busybox-7b57f96db7-qq97h -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 kubectl -- exec busybox-7b57f96db7-zcthh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 kubectl -- exec busybox-7b57f96db7-zcthh -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.25s)
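Note: the host-reachability check resolves host.minikube.internal inside a pod and then pings the resulting address (the host-side gateway, 192.168.39.1 in this run). An annotated sketch of the same pipeline, assuming busybox's nslookup output format, where line 5 carries the resolved address:
  # resolve the host gateway name from inside the pod...
  HOST_IP=$(out/minikube-linux-amd64 -p ha-423720 kubectl -- exec busybox-7b57f96db7-5n9tl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
  # ...then confirm the pod can reach the host over that address
  out/minikube-linux-amd64 -p ha-423720 kubectl -- exec busybox-7b57f96db7-5n9tl -- sh -c "ping -c 1 $HOST_IP"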

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (43.24s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-423720 node add --alsologtostderr -v 5: (42.576710975s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (43.24s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-423720 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.62s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (10.28s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 status --output json --alsologtostderr -v 5
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 cp testdata/cp-test.txt ha-423720:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 ssh -n ha-423720 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 cp ha-423720:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1544891538/001/cp-test_ha-423720.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 ssh -n ha-423720 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 cp ha-423720:/home/docker/cp-test.txt ha-423720-m02:/home/docker/cp-test_ha-423720_ha-423720-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 ssh -n ha-423720 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 ssh -n ha-423720-m02 "sudo cat /home/docker/cp-test_ha-423720_ha-423720-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 cp ha-423720:/home/docker/cp-test.txt ha-423720-m03:/home/docker/cp-test_ha-423720_ha-423720-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 ssh -n ha-423720 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 ssh -n ha-423720-m03 "sudo cat /home/docker/cp-test_ha-423720_ha-423720-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 cp ha-423720:/home/docker/cp-test.txt ha-423720-m04:/home/docker/cp-test_ha-423720_ha-423720-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 ssh -n ha-423720 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 ssh -n ha-423720-m04 "sudo cat /home/docker/cp-test_ha-423720_ha-423720-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 cp testdata/cp-test.txt ha-423720-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 ssh -n ha-423720-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 cp ha-423720-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1544891538/001/cp-test_ha-423720-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 ssh -n ha-423720-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 cp ha-423720-m02:/home/docker/cp-test.txt ha-423720:/home/docker/cp-test_ha-423720-m02_ha-423720.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 ssh -n ha-423720-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 ssh -n ha-423720 "sudo cat /home/docker/cp-test_ha-423720-m02_ha-423720.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 cp ha-423720-m02:/home/docker/cp-test.txt ha-423720-m03:/home/docker/cp-test_ha-423720-m02_ha-423720-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 ssh -n ha-423720-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 ssh -n ha-423720-m03 "sudo cat /home/docker/cp-test_ha-423720-m02_ha-423720-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 cp ha-423720-m02:/home/docker/cp-test.txt ha-423720-m04:/home/docker/cp-test_ha-423720-m02_ha-423720-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 ssh -n ha-423720-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 ssh -n ha-423720-m04 "sudo cat /home/docker/cp-test_ha-423720-m02_ha-423720-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 cp testdata/cp-test.txt ha-423720-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 ssh -n ha-423720-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 cp ha-423720-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1544891538/001/cp-test_ha-423720-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 ssh -n ha-423720-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 cp ha-423720-m03:/home/docker/cp-test.txt ha-423720:/home/docker/cp-test_ha-423720-m03_ha-423720.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 ssh -n ha-423720-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 ssh -n ha-423720 "sudo cat /home/docker/cp-test_ha-423720-m03_ha-423720.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 cp ha-423720-m03:/home/docker/cp-test.txt ha-423720-m02:/home/docker/cp-test_ha-423720-m03_ha-423720-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 ssh -n ha-423720-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 ssh -n ha-423720-m02 "sudo cat /home/docker/cp-test_ha-423720-m03_ha-423720-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 cp ha-423720-m03:/home/docker/cp-test.txt ha-423720-m04:/home/docker/cp-test_ha-423720-m03_ha-423720-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 ssh -n ha-423720-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 ssh -n ha-423720-m04 "sudo cat /home/docker/cp-test_ha-423720-m03_ha-423720-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 cp testdata/cp-test.txt ha-423720-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 ssh -n ha-423720-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 cp ha-423720-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1544891538/001/cp-test_ha-423720-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 ssh -n ha-423720-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 cp ha-423720-m04:/home/docker/cp-test.txt ha-423720:/home/docker/cp-test_ha-423720-m04_ha-423720.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 ssh -n ha-423720-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 ssh -n ha-423720 "sudo cat /home/docker/cp-test_ha-423720-m04_ha-423720.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 cp ha-423720-m04:/home/docker/cp-test.txt ha-423720-m02:/home/docker/cp-test_ha-423720-m04_ha-423720-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 ssh -n ha-423720-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 ssh -n ha-423720-m02 "sudo cat /home/docker/cp-test_ha-423720-m04_ha-423720-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 cp ha-423720-m04:/home/docker/cp-test.txt ha-423720-m03:/home/docker/cp-test_ha-423720-m04_ha-423720-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 ssh -n ha-423720-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 ssh -n ha-423720-m03 "sudo cat /home/docker/cp-test_ha-423720-m04_ha-423720-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (10.28s)
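Note: the copy matrix above exercises every direction minikube cp supports on a four-node cluster. The three forms, taken verbatim from this run (only the first node pair is shown; the test repeats them for every pair), each verified over ssh:
  # host -> node
  out/minikube-linux-amd64 -p ha-423720 cp testdata/cp-test.txt ha-423720:/home/docker/cp-test.txt
  # node -> host (into the test's temporary directory)
  out/minikube-linux-amd64 -p ha-423720 cp ha-423720:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1544891538/001/cp-test_ha-423720.txt
  # node -> node
  out/minikube-linux-amd64 -p ha-423720 cp ha-423720:/home/docker/cp-test.txt ha-423720-m02:/home/docker/cp-test_ha-423720_ha-423720-m02.txt
  # verify the copy landed
  out/minikube-linux-amd64 -p ha-423720 ssh -n ha-423720-m02 "sudo cat /home/docker/cp-test_ha-423720_ha-423720-m02.txt"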

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (89.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 node stop m02 --alsologtostderr -v 5
E1219 03:04:18.512600    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-936345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:04:18.517868    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-936345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:04:18.528150    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-936345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:04:18.548472    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-936345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:04:18.588847    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-936345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:04:18.669176    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-936345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:04:18.829532    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-936345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:04:19.150154    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-936345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:04:19.790822    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-936345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:04:21.071327    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-936345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:04:23.632593    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-936345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:04:28.753694    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-936345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:04:38.994194    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-936345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:04:59.474502    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-936345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-423720 node stop m02 --alsologtostderr -v 5: (1m29.05857468s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-423720 status --alsologtostderr -v 5: exit status 7 (478.938146ms)

                                                
                                                
-- stdout --
	ha-423720
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-423720-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-423720-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-423720-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1219 03:05:24.227774   25810 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:05:24.228030   25810 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:05:24.228041   25810 out.go:374] Setting ErrFile to fd 2...
	I1219 03:05:24.228046   25810 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:05:24.228218   25810 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5010/.minikube/bin
	I1219 03:05:24.228365   25810 out.go:368] Setting JSON to false
	I1219 03:05:24.228386   25810 mustload.go:66] Loading cluster: ha-423720
	I1219 03:05:24.228474   25810 notify.go:221] Checking for updates...
	I1219 03:05:24.228735   25810 config.go:182] Loaded profile config "ha-423720": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:05:24.228750   25810 status.go:174] checking status of ha-423720 ...
	I1219 03:05:24.231055   25810 status.go:371] ha-423720 host status = "Running" (err=<nil>)
	I1219 03:05:24.231071   25810 host.go:66] Checking if "ha-423720" exists ...
	I1219 03:05:24.233759   25810 main.go:144] libmachine: domain ha-423720 has defined MAC address 52:54:00:5a:12:bd in network mk-ha-423720
	I1219 03:05:24.234283   25810 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:5a:12:bd", ip: ""} in network mk-ha-423720: {Iface:virbr1 ExpiryTime:2025-12-19 03:59:41 +0000 UTC Type:0 Mac:52:54:00:5a:12:bd Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-423720 Clientid:01:52:54:00:5a:12:bd}
	I1219 03:05:24.234306   25810 main.go:144] libmachine: domain ha-423720 has defined IP address 192.168.39.109 and MAC address 52:54:00:5a:12:bd in network mk-ha-423720
	I1219 03:05:24.234505   25810 host.go:66] Checking if "ha-423720" exists ...
	I1219 03:05:24.234740   25810 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1219 03:05:24.237251   25810 main.go:144] libmachine: domain ha-423720 has defined MAC address 52:54:00:5a:12:bd in network mk-ha-423720
	I1219 03:05:24.237768   25810 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:5a:12:bd", ip: ""} in network mk-ha-423720: {Iface:virbr1 ExpiryTime:2025-12-19 03:59:41 +0000 UTC Type:0 Mac:52:54:00:5a:12:bd Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-423720 Clientid:01:52:54:00:5a:12:bd}
	I1219 03:05:24.237794   25810 main.go:144] libmachine: domain ha-423720 has defined IP address 192.168.39.109 and MAC address 52:54:00:5a:12:bd in network mk-ha-423720
	I1219 03:05:24.237967   25810 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/ha-423720/id_rsa Username:docker}
	I1219 03:05:24.325529   25810 ssh_runner.go:195] Run: systemctl --version
	I1219 03:05:24.332718   25810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:05:24.351749   25810 kubeconfig.go:125] found "ha-423720" server: "https://192.168.39.254:8443"
	I1219 03:05:24.351785   25810 api_server.go:166] Checking apiserver status ...
	I1219 03:05:24.351821   25810 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:05:24.372135   25810 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1397/cgroup
	W1219 03:05:24.384967   25810 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1397/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1219 03:05:24.385048   25810 ssh_runner.go:195] Run: ls
	I1219 03:05:24.391176   25810 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1219 03:05:24.396171   25810 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1219 03:05:24.396191   25810 status.go:463] ha-423720 apiserver status = Running (err=<nil>)
	I1219 03:05:24.396202   25810 status.go:176] ha-423720 status: &{Name:ha-423720 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1219 03:05:24.396220   25810 status.go:174] checking status of ha-423720-m02 ...
	I1219 03:05:24.397801   25810 status.go:371] ha-423720-m02 host status = "Stopped" (err=<nil>)
	I1219 03:05:24.397817   25810 status.go:384] host is not running, skipping remaining checks
	I1219 03:05:24.397824   25810 status.go:176] ha-423720-m02 status: &{Name:ha-423720-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1219 03:05:24.397840   25810 status.go:174] checking status of ha-423720-m03 ...
	I1219 03:05:24.399000   25810 status.go:371] ha-423720-m03 host status = "Running" (err=<nil>)
	I1219 03:05:24.399015   25810 host.go:66] Checking if "ha-423720-m03" exists ...
	I1219 03:05:24.401244   25810 main.go:144] libmachine: domain ha-423720-m03 has defined MAC address 52:54:00:ab:2e:ea in network mk-ha-423720
	I1219 03:05:24.401604   25810 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:2e:ea", ip: ""} in network mk-ha-423720: {Iface:virbr1 ExpiryTime:2025-12-19 04:01:36 +0000 UTC Type:0 Mac:52:54:00:ab:2e:ea Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-423720-m03 Clientid:01:52:54:00:ab:2e:ea}
	I1219 03:05:24.401629   25810 main.go:144] libmachine: domain ha-423720-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:ab:2e:ea in network mk-ha-423720
	I1219 03:05:24.401765   25810 host.go:66] Checking if "ha-423720-m03" exists ...
	I1219 03:05:24.401944   25810 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1219 03:05:24.403747   25810 main.go:144] libmachine: domain ha-423720-m03 has defined MAC address 52:54:00:ab:2e:ea in network mk-ha-423720
	I1219 03:05:24.404053   25810 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:2e:ea", ip: ""} in network mk-ha-423720: {Iface:virbr1 ExpiryTime:2025-12-19 04:01:36 +0000 UTC Type:0 Mac:52:54:00:ab:2e:ea Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-423720-m03 Clientid:01:52:54:00:ab:2e:ea}
	I1219 03:05:24.404084   25810 main.go:144] libmachine: domain ha-423720-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:ab:2e:ea in network mk-ha-423720
	I1219 03:05:24.404201   25810 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/ha-423720-m03/id_rsa Username:docker}
	I1219 03:05:24.483558   25810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:05:24.503413   25810 kubeconfig.go:125] found "ha-423720" server: "https://192.168.39.254:8443"
	I1219 03:05:24.503446   25810 api_server.go:166] Checking apiserver status ...
	I1219 03:05:24.503490   25810 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:05:24.522321   25810 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1815/cgroup
	W1219 03:05:24.532466   25810 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1815/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1219 03:05:24.532532   25810 ssh_runner.go:195] Run: ls
	I1219 03:05:24.537555   25810 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1219 03:05:24.542055   25810 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1219 03:05:24.542074   25810 status.go:463] ha-423720-m03 apiserver status = Running (err=<nil>)
	I1219 03:05:24.542082   25810 status.go:176] ha-423720-m03 status: &{Name:ha-423720-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1219 03:05:24.542094   25810 status.go:174] checking status of ha-423720-m04 ...
	I1219 03:05:24.543769   25810 status.go:371] ha-423720-m04 host status = "Running" (err=<nil>)
	I1219 03:05:24.543785   25810 host.go:66] Checking if "ha-423720-m04" exists ...
	I1219 03:05:24.546163   25810 main.go:144] libmachine: domain ha-423720-m04 has defined MAC address 52:54:00:16:7f:e7 in network mk-ha-423720
	I1219 03:05:24.546554   25810 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:16:7f:e7", ip: ""} in network mk-ha-423720: {Iface:virbr1 ExpiryTime:2025-12-19 04:03:16 +0000 UTC Type:0 Mac:52:54:00:16:7f:e7 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-423720-m04 Clientid:01:52:54:00:16:7f:e7}
	I1219 03:05:24.546596   25810 main.go:144] libmachine: domain ha-423720-m04 has defined IP address 192.168.39.194 and MAC address 52:54:00:16:7f:e7 in network mk-ha-423720
	I1219 03:05:24.546744   25810 host.go:66] Checking if "ha-423720-m04" exists ...
	I1219 03:05:24.546911   25810 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1219 03:05:24.549098   25810 main.go:144] libmachine: domain ha-423720-m04 has defined MAC address 52:54:00:16:7f:e7 in network mk-ha-423720
	I1219 03:05:24.549510   25810 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:16:7f:e7", ip: ""} in network mk-ha-423720: {Iface:virbr1 ExpiryTime:2025-12-19 04:03:16 +0000 UTC Type:0 Mac:52:54:00:16:7f:e7 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-423720-m04 Clientid:01:52:54:00:16:7f:e7}
	I1219 03:05:24.549533   25810 main.go:144] libmachine: domain ha-423720-m04 has defined IP address 192.168.39.194 and MAC address 52:54:00:16:7f:e7 in network mk-ha-423720
	I1219 03:05:24.549697   25810 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/ha-423720-m04/id_rsa Username:docker}
	I1219 03:05:24.630472   25810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:05:24.648854   25810 status.go:176] ha-423720-m04 status: &{Name:ha-423720-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (89.54s)
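The status output above shows how each node is probed: /var disk usage is read over SSH with df -h /var | awk 'NR==2{print $5}', kubelet is checked with systemctl is-active, and the apiserver is probed at the cluster VIP's /healthz endpoint. Below is a minimal, standalone Go sketch of two of those probes (the df pipeline and the healthz request). It is an illustration only, not minikube's actual status.go code, and it runs the df pipeline locally rather than over SSH; the VIP address is copied from the log above.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Disk usage of /var: second line of df output, fifth column (e.g. "17%").
	out, err := exec.Command("sh", "-c", `df -h /var | awk 'NR==2{print $5}'`).Output()
	if err == nil {
		fmt.Println("/var usage:", strings.TrimSpace(string(out)))
	}

	// Health probe against the cluster's virtual IP (address taken from the log above).
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz status:", resp.StatusCode)
}

A 200 from /healthz is what the log records as "apiserver status = Running"; a stopped host short-circuits before any of these probes, as seen for ha-423720-m02 above.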

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.47s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (36.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 node start m02 --alsologtostderr -v 5
E1219 03:05:40.434911    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-936345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:05:45.208532    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-199791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:05:52.679844    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-423720 node start m02 --alsologtostderr -v 5: (35.725342521s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (36.49s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.79s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (374.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 stop --alsologtostderr -v 5
E1219 03:07:02.357219    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-936345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:07:08.256030    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-199791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:07:49.625828    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:09:18.513999    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-936345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:09:46.197528    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-936345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-423720 stop --alsologtostderr -v 5: (4m20.742999554s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 start --wait true --alsologtostderr -v 5
E1219 03:10:45.207751    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-199791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-423720 start --wait true --alsologtostderr -v 5: (1m53.925795855s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (374.80s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (17.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-423720 node delete m03 --alsologtostderr -v 5: (17.214516286s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.80s)
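The readiness check above pipes kubectl get nodes through a go-template that walks every node's conditions and prints the status of the condition whose type is "Ready". The sketch below reproduces that template logic with Go's text/template against stand-in structs; note that kubectl evaluates the lowercase field names (.items, .status, .type) against unstructured JSON, so this sketch capitalises them for exported struct fields.

package main

import (
	"os"
	"text/template"
)

type Condition struct {
	Type   string
	Status string
}

type Node struct {
	Status struct{ Conditions []Condition }
}

func main() {
	// Same structure as the template in the test, with exported field names.
	const tmpl = `{{range .Items}}{{range .Status.Conditions}}{{if eq .Type "Ready"}} {{.Status}}{{"\n"}}{{end}}{{end}}{{end}}`

	var n1, n2 Node
	n1.Status.Conditions = []Condition{{Type: "Ready", Status: "True"}}
	n2.Status.Conditions = []Condition{{Type: "MemoryPressure", Status: "False"}, {Type: "Ready", Status: "True"}}

	data := struct{ Items []Node }{Items: []Node{n1, n2}}
	// Prints " True" once per node, which is what the test asserts on.
	template.Must(template.New("ready").Parse(tmpl)).Execute(os.Stdout, data)
}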

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.48s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (252.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 stop --alsologtostderr -v 5
E1219 03:12:49.625526    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:14:18.513842    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-936345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:15:45.210692    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-199791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-423720 stop --alsologtostderr -v 5: (4m12.338887129s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-423720 status --alsologtostderr -v 5: exit status 7 (58.407034ms)

                                                
                                                
-- stdout --
	ha-423720
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-423720-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-423720-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1219 03:16:47.874522   29057 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:16:47.874662   29057 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:16:47.874672   29057 out.go:374] Setting ErrFile to fd 2...
	I1219 03:16:47.874677   29057 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:16:47.874854   29057 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5010/.minikube/bin
	I1219 03:16:47.875024   29057 out.go:368] Setting JSON to false
	I1219 03:16:47.875053   29057 mustload.go:66] Loading cluster: ha-423720
	I1219 03:16:47.875186   29057 notify.go:221] Checking for updates...
	I1219 03:16:47.875463   29057 config.go:182] Loaded profile config "ha-423720": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:16:47.875479   29057 status.go:174] checking status of ha-423720 ...
	I1219 03:16:47.877311   29057 status.go:371] ha-423720 host status = "Stopped" (err=<nil>)
	I1219 03:16:47.877325   29057 status.go:384] host is not running, skipping remaining checks
	I1219 03:16:47.877332   29057 status.go:176] ha-423720 status: &{Name:ha-423720 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1219 03:16:47.877347   29057 status.go:174] checking status of ha-423720-m02 ...
	I1219 03:16:47.878357   29057 status.go:371] ha-423720-m02 host status = "Stopped" (err=<nil>)
	I1219 03:16:47.878370   29057 status.go:384] host is not running, skipping remaining checks
	I1219 03:16:47.878375   29057 status.go:176] ha-423720-m02 status: &{Name:ha-423720-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1219 03:16:47.878393   29057 status.go:174] checking status of ha-423720-m04 ...
	I1219 03:16:47.879361   29057 status.go:371] ha-423720-m04 host status = "Stopped" (err=<nil>)
	I1219 03:16:47.879373   29057 status.go:384] host is not running, skipping remaining checks
	I1219 03:16:47.879378   29057 status.go:176] ha-423720-m04 status: &{Name:ha-423720-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (252.40s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (93.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1219 03:17:49.626000    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-423720 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (1m33.268294541s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (93.87s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.48s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (100.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 node add --control-plane --alsologtostderr -v 5
E1219 03:19:18.512903    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-936345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-423720 node add --control-plane --alsologtostderr -v 5: (1m39.938479214s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-423720 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (100.60s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.63s)

                                                
                                    
x
+
TestJSONOutput/start/Command (78.79s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-068807 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
E1219 03:20:41.559419    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-936345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:20:45.210401    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-199791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-068807 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m18.785747765s)
--- PASS: TestJSONOutput/start/Command (78.79s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.69s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-068807 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.69s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.62s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-068807 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (6.75s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-068807 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-068807 --output=json --user=testUser: (6.750725032s)
--- PASS: TestJSONOutput/stop/Command (6.75s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-636255 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-636255 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (71.347153ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"db413a76-839d-4a2d-82a8-72ac65e8a020","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-636255] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4b479a8b-8e55-4a23-9e1c-193054aa6980","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22230"}}
	{"specversion":"1.0","id":"9fecf613-98e2-4b2e-8ede-efe754be1801","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9bf304a1-79d5-4e09-a5b4-b998cdad97f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22230-5010/kubeconfig"}}
	{"specversion":"1.0","id":"33288855-2895-4593-8e9b-11aa50360616","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5010/.minikube"}}
	{"specversion":"1.0","id":"8f5e397d-6724-4cc9-b08b-e77d635e43a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"e7cb2eaa-abc4-45c0-8745-60034f82df1b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a3d34ca5-24d1-4c5d-a55c-018be170654b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-636255" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-636255
--- PASS: TestErrorJSONOutput (0.22s)
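The stdout above is a stream of CloudEvents-style JSON lines, one event per line; the final io.k8s.sigs.minikube.error event carries the exit code and the DRV_UNSUPPORTED_OS name that produce exit status 56. A minimal sketch of decoding one such line follows; the struct mirrors only the fields visible in this output and is not minikube's own event schema definition.

package main

import (
	"encoding/json"
	"fmt"
)

type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// Abbreviated copy of the error event from the stdout above.
	line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`

	var e event
	if err := json.Unmarshal([]byte(line), &e); err != nil {
		panic(err)
	}
	fmt.Println(e.Type, "->", e.Data["name"], "exit", e.Data["exitcode"])
}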

                                                
                                    
x
+
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
x
+
TestMinikubeProfile (74s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-424462 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-424462 --driver=kvm2  --container-runtime=crio: (35.118448984s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-426892 --driver=kvm2  --container-runtime=crio
E1219 03:22:32.682351    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-426892 --driver=kvm2  --container-runtime=crio: (36.413125102s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-424462
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-426892
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:176: Cleaning up "second-426892" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p second-426892
helpers_test.go:176: Cleaning up "first-424462" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p first-424462
--- PASS: TestMinikubeProfile (74.00s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (19.74s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-950736 --memory=3072 --mount-string /tmp/TestMountStartserial1600025662/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1219 03:22:49.625404    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-950736 --memory=3072 --mount-string /tmp/TestMountStartserial1600025662/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (18.737856066s)
--- PASS: TestMountStart/serial/StartWithMountFirst (19.74s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-950736 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-950736 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.30s)
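The VerifyMount* steps above list the mounted directory with ls and then request findmnt --json /minikube-host. The sketch below decodes that JSON and prints the mounted filesystems; the struct follows the util-linux findmnt JSON layout (a "filesystems" array with target/source/fstype), which is an assumption about the exact output shape, and the profile name is copied from the log.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type findmntOutput struct {
	Filesystems []struct {
		Target string `json:"target"`
		Source string `json:"source"`
		FSType string `json:"fstype"`
	} `json:"filesystems"`
}

func main() {
	// Same command the test issues, run against the first mount-start profile.
	out, err := exec.Command("minikube", "-p", "mount-start-1-950736",
		"ssh", "--", "findmnt", "--json", "/minikube-host").Output()
	if err != nil {
		panic(err)
	}
	var fm findmntOutput
	if err := json.Unmarshal(out, &fm); err != nil {
		panic(err)
	}
	for _, fs := range fm.Filesystems {
		fmt.Printf("%s on %s (%s)\n", fs.Source, fs.Target, fs.FSType)
	}
}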

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (20.33s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-963218 --memory=3072 --mount-string /tmp/TestMountStartserial1600025662/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-963218 --memory=3072 --mount-string /tmp/TestMountStartserial1600025662/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (19.330736726s)
--- PASS: TestMountStart/serial/StartWithMountSecond (20.33s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-963218 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-963218 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.30s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.67s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-950736 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.67s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-963218 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-963218 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.31s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.18s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-963218
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-963218: (1.178518087s)
--- PASS: TestMountStart/serial/Stop (1.18s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (18.58s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-963218
E1219 03:23:48.257718    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-199791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-963218: (17.574036881s)
--- PASS: TestMountStart/serial/RestartStopped (18.58s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-963218 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-963218 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.30s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (99.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-154692 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1219 03:24:18.512960    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-936345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-154692 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m39.319149579s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-154692 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (99.63s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (6.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-154692 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-154692 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-154692 -- rollout status deployment/busybox: (4.779358161s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-154692 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-154692 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-154692 -- exec busybox-7b57f96db7-ptw4f -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-154692 -- exec busybox-7b57f96db7-x6hq8 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-154692 -- exec busybox-7b57f96db7-ptw4f -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-154692 -- exec busybox-7b57f96db7-x6hq8 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-154692 -- exec busybox-7b57f96db7-ptw4f -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-154692 -- exec busybox-7b57f96db7-x6hq8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.27s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-154692 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-154692 -- exec busybox-7b57f96db7-ptw4f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-154692 -- exec busybox-7b57f96db7-ptw4f -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-154692 -- exec busybox-7b57f96db7-x6hq8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-154692 -- exec busybox-7b57f96db7-x6hq8 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.83s)
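The host-ping test above extracts the host IP from inside a pod with nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3 (line 5 of the resolver output, third space-separated field) and then pings that address. The Go sketch below performs the same extraction; the sample resolver output is illustrative of the busybox nslookup format, not captured from this run.

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Hypothetical busybox-style nslookup output for host.minikube.internal.
	sample := `Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.39.1`

	lines := strings.Split(sample, "\n")
	fields := strings.Split(lines[4], " ") // awk 'NR==5' selects line 5, index 4 here
	fmt.Println(fields[2])                 // cut -d' ' -f3 yields "192.168.39.1"
}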

                                                
                                    
x
+
TestMultiNode/serial/AddNode (41.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-154692 -v=5 --alsologtostderr
E1219 03:25:45.208292    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-199791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-154692 -v=5 --alsologtostderr: (41.090734298s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-154692 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (41.50s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-154692 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.45s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (5.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-154692 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-154692 cp testdata/cp-test.txt multinode-154692:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-154692 ssh -n multinode-154692 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-154692 cp multinode-154692:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2314616443/001/cp-test_multinode-154692.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-154692 ssh -n multinode-154692 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-154692 cp multinode-154692:/home/docker/cp-test.txt multinode-154692-m02:/home/docker/cp-test_multinode-154692_multinode-154692-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-154692 ssh -n multinode-154692 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-154692 ssh -n multinode-154692-m02 "sudo cat /home/docker/cp-test_multinode-154692_multinode-154692-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-154692 cp multinode-154692:/home/docker/cp-test.txt multinode-154692-m03:/home/docker/cp-test_multinode-154692_multinode-154692-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-154692 ssh -n multinode-154692 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-154692 ssh -n multinode-154692-m03 "sudo cat /home/docker/cp-test_multinode-154692_multinode-154692-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-154692 cp testdata/cp-test.txt multinode-154692-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-154692 ssh -n multinode-154692-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-154692 cp multinode-154692-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2314616443/001/cp-test_multinode-154692-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-154692 ssh -n multinode-154692-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-154692 cp multinode-154692-m02:/home/docker/cp-test.txt multinode-154692:/home/docker/cp-test_multinode-154692-m02_multinode-154692.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-154692 ssh -n multinode-154692-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-154692 ssh -n multinode-154692 "sudo cat /home/docker/cp-test_multinode-154692-m02_multinode-154692.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-154692 cp multinode-154692-m02:/home/docker/cp-test.txt multinode-154692-m03:/home/docker/cp-test_multinode-154692-m02_multinode-154692-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-154692 ssh -n multinode-154692-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-154692 ssh -n multinode-154692-m03 "sudo cat /home/docker/cp-test_multinode-154692-m02_multinode-154692-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-154692 cp testdata/cp-test.txt multinode-154692-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-154692 ssh -n multinode-154692-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-154692 cp multinode-154692-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2314616443/001/cp-test_multinode-154692-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-154692 ssh -n multinode-154692-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-154692 cp multinode-154692-m03:/home/docker/cp-test.txt multinode-154692:/home/docker/cp-test_multinode-154692-m03_multinode-154692.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-154692 ssh -n multinode-154692-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-154692 ssh -n multinode-154692 "sudo cat /home/docker/cp-test_multinode-154692-m03_multinode-154692.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-154692 cp multinode-154692-m03:/home/docker/cp-test.txt multinode-154692-m02:/home/docker/cp-test_multinode-154692-m03_multinode-154692-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-154692 ssh -n multinode-154692-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-154692 ssh -n multinode-154692-m02 "sudo cat /home/docker/cp-test_multinode-154692-m03_multinode-154692-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (5.77s)
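Each CopyFile step above is a round trip: minikube cp pushes a file into a node, then minikube ssh -n <node> "sudo cat ..." reads it back so the contents can be compared. A condensed sketch of one such round trip follows; the profile name, node name, and paths are taken from the log above, and the minikube binary is assumed to be on PATH rather than at out/minikube-linux-amd64.

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	local := "testdata/cp-test.txt"
	want, err := os.ReadFile(local)
	if err != nil {
		panic(err)
	}

	// Push the file into the control-plane node.
	if err := exec.Command("minikube", "-p", "multinode-154692",
		"cp", local, "multinode-154692:/home/docker/cp-test.txt").Run(); err != nil {
		panic(err)
	}

	// Read it back over SSH and compare with the original.
	got, err := exec.Command("minikube", "-p", "multinode-154692",
		"ssh", "-n", "multinode-154692", "sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("contents match:", bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)))
}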

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.05s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-154692 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-154692 node stop m03: (1.439275733s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-154692 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-154692 status: exit status 7 (302.378607ms)

                                                
                                                
-- stdout --
	multinode-154692
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-154692-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-154692-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-154692 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-154692 status --alsologtostderr: exit status 7 (308.100857ms)

                                                
                                                
-- stdout --
	multinode-154692
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-154692-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-154692-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1219 03:26:26.923128   34996 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:26:26.923232   34996 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:26:26.923239   34996 out.go:374] Setting ErrFile to fd 2...
	I1219 03:26:26.923246   34996 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:26:26.923489   34996 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5010/.minikube/bin
	I1219 03:26:26.923680   34996 out.go:368] Setting JSON to false
	I1219 03:26:26.923710   34996 mustload.go:66] Loading cluster: multinode-154692
	I1219 03:26:26.923844   34996 notify.go:221] Checking for updates...
	I1219 03:26:26.924156   34996 config.go:182] Loaded profile config "multinode-154692": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:26:26.924176   34996 status.go:174] checking status of multinode-154692 ...
	I1219 03:26:26.926260   34996 status.go:371] multinode-154692 host status = "Running" (err=<nil>)
	I1219 03:26:26.926276   34996 host.go:66] Checking if "multinode-154692" exists ...
	I1219 03:26:26.928840   34996 main.go:144] libmachine: domain multinode-154692 has defined MAC address 52:54:00:fc:73:88 in network mk-multinode-154692
	I1219 03:26:26.929305   34996 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fc:73:88", ip: ""} in network mk-multinode-154692: {Iface:virbr1 ExpiryTime:2025-12-19 04:24:04 +0000 UTC Type:0 Mac:52:54:00:fc:73:88 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:multinode-154692 Clientid:01:52:54:00:fc:73:88}
	I1219 03:26:26.929341   34996 main.go:144] libmachine: domain multinode-154692 has defined IP address 192.168.39.33 and MAC address 52:54:00:fc:73:88 in network mk-multinode-154692
	I1219 03:26:26.929504   34996 host.go:66] Checking if "multinode-154692" exists ...
	I1219 03:26:26.929736   34996 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1219 03:26:26.932039   34996 main.go:144] libmachine: domain multinode-154692 has defined MAC address 52:54:00:fc:73:88 in network mk-multinode-154692
	I1219 03:26:26.932555   34996 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fc:73:88", ip: ""} in network mk-multinode-154692: {Iface:virbr1 ExpiryTime:2025-12-19 04:24:04 +0000 UTC Type:0 Mac:52:54:00:fc:73:88 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:multinode-154692 Clientid:01:52:54:00:fc:73:88}
	I1219 03:26:26.932601   34996 main.go:144] libmachine: domain multinode-154692 has defined IP address 192.168.39.33 and MAC address 52:54:00:fc:73:88 in network mk-multinode-154692
	I1219 03:26:26.932744   34996 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/multinode-154692/id_rsa Username:docker}
	I1219 03:26:27.009818   34996 ssh_runner.go:195] Run: systemctl --version
	I1219 03:26:27.016034   34996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:26:27.032687   34996 kubeconfig.go:125] found "multinode-154692" server: "https://192.168.39.33:8443"
	I1219 03:26:27.032713   34996 api_server.go:166] Checking apiserver status ...
	I1219 03:26:27.032743   34996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:26:27.051827   34996 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1370/cgroup
	W1219 03:26:27.063107   34996 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1370/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1219 03:26:27.063157   34996 ssh_runner.go:195] Run: ls
	I1219 03:26:27.067793   34996 api_server.go:253] Checking apiserver healthz at https://192.168.39.33:8443/healthz ...
	I1219 03:26:27.073831   34996 api_server.go:279] https://192.168.39.33:8443/healthz returned 200:
	ok
	I1219 03:26:27.073855   34996 status.go:463] multinode-154692 apiserver status = Running (err=<nil>)
	I1219 03:26:27.073866   34996 status.go:176] multinode-154692 status: &{Name:multinode-154692 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1219 03:26:27.073892   34996 status.go:174] checking status of multinode-154692-m02 ...
	I1219 03:26:27.075608   34996 status.go:371] multinode-154692-m02 host status = "Running" (err=<nil>)
	I1219 03:26:27.075627   34996 host.go:66] Checking if "multinode-154692-m02" exists ...
	I1219 03:26:27.078219   34996 main.go:144] libmachine: domain multinode-154692-m02 has defined MAC address 52:54:00:2a:b6:fe in network mk-multinode-154692
	I1219 03:26:27.078765   34996 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:b6:fe", ip: ""} in network mk-multinode-154692: {Iface:virbr1 ExpiryTime:2025-12-19 04:24:59 +0000 UTC Type:0 Mac:52:54:00:2a:b6:fe Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-154692-m02 Clientid:01:52:54:00:2a:b6:fe}
	I1219 03:26:27.078826   34996 main.go:144] libmachine: domain multinode-154692-m02 has defined IP address 192.168.39.26 and MAC address 52:54:00:2a:b6:fe in network mk-multinode-154692
	I1219 03:26:27.079012   34996 host.go:66] Checking if "multinode-154692-m02" exists ...
	I1219 03:26:27.079287   34996 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1219 03:26:27.081392   34996 main.go:144] libmachine: domain multinode-154692-m02 has defined MAC address 52:54:00:2a:b6:fe in network mk-multinode-154692
	I1219 03:26:27.081747   34996 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:b6:fe", ip: ""} in network mk-multinode-154692: {Iface:virbr1 ExpiryTime:2025-12-19 04:24:59 +0000 UTC Type:0 Mac:52:54:00:2a:b6:fe Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-154692-m02 Clientid:01:52:54:00:2a:b6:fe}
	I1219 03:26:27.081770   34996 main.go:144] libmachine: domain multinode-154692-m02 has defined IP address 192.168.39.26 and MAC address 52:54:00:2a:b6:fe in network mk-multinode-154692
	I1219 03:26:27.081898   34996 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5010/.minikube/machines/multinode-154692-m02/id_rsa Username:docker}
	I1219 03:26:27.157823   34996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:26:27.173622   34996 status.go:176] multinode-154692-m02 status: &{Name:multinode-154692-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1219 03:26:27.173662   34996 status.go:174] checking status of multinode-154692-m03 ...
	I1219 03:26:27.175412   34996 status.go:371] multinode-154692-m03 host status = "Stopped" (err=<nil>)
	I1219 03:26:27.175427   34996 status.go:384] host is not running, skipping remaining checks
	I1219 03:26:27.175432   34996 status.go:176] multinode-154692-m03 status: &{Name:multinode-154692-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.05s)

TestMultiNode/serial/StartAfterStop (37.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-154692 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-154692 node start m03 -v=5 --alsologtostderr: (37.478802112s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-154692 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (37.95s)

TestMultiNode/serial/RestartKeepsNodes (287.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-154692
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-154692
E1219 03:27:49.625891    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:29:18.514241    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-936345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-154692: (2m43.308368578s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-154692 --wait=true -v=5 --alsologtostderr
E1219 03:30:45.208271    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-199791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-154692 --wait=true -v=5 --alsologtostderr: (2m4.028193456s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-154692
--- PASS: TestMultiNode/serial/RestartKeepsNodes (287.45s)

TestMultiNode/serial/DeleteNode (2.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-154692 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-154692 node delete m03: (2.014145268s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-154692 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.45s)

TestMultiNode/serial/StopMultiNode (162.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-154692 stop
E1219 03:32:49.625753    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:34:18.514179    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-936345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-154692 stop: (2m42.16692853s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-154692 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-154692 status: exit status 7 (59.490562ms)

                                                
                                                
-- stdout --
	multinode-154692
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-154692-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-154692 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-154692 status --alsologtostderr: exit status 7 (57.74161ms)

                                                
                                                
-- stdout --
	multinode-154692
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-154692-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1219 03:34:37.313519   37278 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:34:37.313641   37278 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:34:37.313652   37278 out.go:374] Setting ErrFile to fd 2...
	I1219 03:34:37.313658   37278 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:34:37.313860   37278 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5010/.minikube/bin
	I1219 03:34:37.314050   37278 out.go:368] Setting JSON to false
	I1219 03:34:37.314078   37278 mustload.go:66] Loading cluster: multinode-154692
	I1219 03:34:37.314206   37278 notify.go:221] Checking for updates...
	I1219 03:34:37.314448   37278 config.go:182] Loaded profile config "multinode-154692": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:34:37.314464   37278 status.go:174] checking status of multinode-154692 ...
	I1219 03:34:37.316516   37278 status.go:371] multinode-154692 host status = "Stopped" (err=<nil>)
	I1219 03:34:37.316534   37278 status.go:384] host is not running, skipping remaining checks
	I1219 03:34:37.316540   37278 status.go:176] multinode-154692 status: &{Name:multinode-154692 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1219 03:34:37.316557   37278 status.go:174] checking status of multinode-154692-m02 ...
	I1219 03:34:37.317705   37278 status.go:371] multinode-154692-m02 host status = "Stopped" (err=<nil>)
	I1219 03:34:37.317717   37278 status.go:384] host is not running, skipping remaining checks
	I1219 03:34:37.317721   37278 status.go:176] multinode-154692-m02 status: &{Name:multinode-154692-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (162.28s)

TestMultiNode/serial/RestartMultiNode (86.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-154692 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1219 03:35:45.208000    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-199791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-154692 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m26.283285908s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-154692 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (86.73s)

TestMultiNode/serial/ValidateNameConflict (40.42s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-154692
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-154692-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-154692-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (68.55414ms)

                                                
                                                
-- stdout --
	* [multinode-154692-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22230
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22230-5010/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5010/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-154692-m02' is duplicated with machine name 'multinode-154692-m02' in profile 'multinode-154692'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-154692-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-154692-m03 --driver=kvm2  --container-runtime=crio: (39.308824201s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-154692
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-154692: exit status 80 (199.231312ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-154692 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-154692-m03 already exists in multinode-154692-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-154692-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (40.42s)

TestScheduledStopUnix (107.32s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-434992 --memory=3072 --driver=kvm2  --container-runtime=crio
E1219 03:39:18.514616    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-936345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-434992 --memory=3072 --driver=kvm2  --container-runtime=crio: (35.798960042s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-434992 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1219 03:39:48.631014   39595 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:39:48.631144   39595 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:39:48.631154   39595 out.go:374] Setting ErrFile to fd 2...
	I1219 03:39:48.631161   39595 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:39:48.631347   39595 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5010/.minikube/bin
	I1219 03:39:48.631617   39595 out.go:368] Setting JSON to false
	I1219 03:39:48.631715   39595 mustload.go:66] Loading cluster: scheduled-stop-434992
	I1219 03:39:48.632007   39595 config.go:182] Loaded profile config "scheduled-stop-434992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:39:48.632079   39595 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/scheduled-stop-434992/config.json ...
	I1219 03:39:48.632259   39595 mustload.go:66] Loading cluster: scheduled-stop-434992
	I1219 03:39:48.632384   39595 config.go:182] Loaded profile config "scheduled-stop-434992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-434992 -n scheduled-stop-434992
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-434992 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1219 03:39:48.911341   39640 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:39:48.911611   39640 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:39:48.911620   39640 out.go:374] Setting ErrFile to fd 2...
	I1219 03:39:48.911625   39640 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:39:48.911812   39640 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5010/.minikube/bin
	I1219 03:39:48.912037   39640 out.go:368] Setting JSON to false
	I1219 03:39:48.912318   39640 daemonize_unix.go:73] killing process 39629 as it is an old scheduled stop
	I1219 03:39:48.912431   39640 mustload.go:66] Loading cluster: scheduled-stop-434992
	I1219 03:39:48.912891   39640 config.go:182] Loaded profile config "scheduled-stop-434992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:39:48.913010   39640 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/scheduled-stop-434992/config.json ...
	I1219 03:39:48.913255   39640 mustload.go:66] Loading cluster: scheduled-stop-434992
	I1219 03:39:48.913424   39640 config.go:182] Loaded profile config "scheduled-stop-434992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1219 03:39:48.917447    8937 retry.go:31] will retry after 73.892µs: open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/scheduled-stop-434992/pid: no such file or directory
I1219 03:39:48.918626    8937 retry.go:31] will retry after 224.823µs: open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/scheduled-stop-434992/pid: no such file or directory
I1219 03:39:48.919787    8937 retry.go:31] will retry after 249.887µs: open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/scheduled-stop-434992/pid: no such file or directory
I1219 03:39:48.920920    8937 retry.go:31] will retry after 236.668µs: open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/scheduled-stop-434992/pid: no such file or directory
I1219 03:39:48.922060    8937 retry.go:31] will retry after 464.059µs: open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/scheduled-stop-434992/pid: no such file or directory
I1219 03:39:48.923159    8937 retry.go:31] will retry after 1.130017ms: open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/scheduled-stop-434992/pid: no such file or directory
I1219 03:39:48.925383    8937 retry.go:31] will retry after 1.414745ms: open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/scheduled-stop-434992/pid: no such file or directory
I1219 03:39:48.927611    8937 retry.go:31] will retry after 1.63733ms: open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/scheduled-stop-434992/pid: no such file or directory
I1219 03:39:48.929838    8937 retry.go:31] will retry after 3.217279ms: open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/scheduled-stop-434992/pid: no such file or directory
I1219 03:39:48.934025    8937 retry.go:31] will retry after 4.744218ms: open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/scheduled-stop-434992/pid: no such file or directory
I1219 03:39:48.939205    8937 retry.go:31] will retry after 7.563304ms: open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/scheduled-stop-434992/pid: no such file or directory
I1219 03:39:48.947465    8937 retry.go:31] will retry after 10.747549ms: open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/scheduled-stop-434992/pid: no such file or directory
I1219 03:39:48.959163    8937 retry.go:31] will retry after 8.401022ms: open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/scheduled-stop-434992/pid: no such file or directory
I1219 03:39:48.968410    8937 retry.go:31] will retry after 29.025638ms: open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/scheduled-stop-434992/pid: no such file or directory
I1219 03:39:48.997577    8937 retry.go:31] will retry after 28.374178ms: open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/scheduled-stop-434992/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-434992 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-434992 -n scheduled-stop-434992
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-434992
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-434992 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1219 03:40:14.579320   39805 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:40:14.579587   39805 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:40:14.579598   39805 out.go:374] Setting ErrFile to fd 2...
	I1219 03:40:14.579601   39805 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:40:14.579814   39805 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5010/.minikube/bin
	I1219 03:40:14.580084   39805 out.go:368] Setting JSON to false
	I1219 03:40:14.580173   39805 mustload.go:66] Loading cluster: scheduled-stop-434992
	I1219 03:40:14.580500   39805 config.go:182] Loaded profile config "scheduled-stop-434992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:40:14.580595   39805 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/scheduled-stop-434992/config.json ...
	I1219 03:40:14.580797   39805 mustload.go:66] Loading cluster: scheduled-stop-434992
	I1219 03:40:14.580915   39805 config.go:182] Loaded profile config "scheduled-stop-434992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3

                                                
                                                
** /stderr **
E1219 03:40:28.260679    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-199791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:172: signal error was:  os: process already finished
E1219 03:40:45.212035    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-199791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-434992
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-434992: exit status 7 (56.908704ms)

                                                
                                                
-- stdout --
	scheduled-stop-434992
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-434992 -n scheduled-stop-434992
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-434992 -n scheduled-stop-434992: exit status 7 (57.27327ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-434992" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-434992
--- PASS: TestScheduledStopUnix (107.32s)

TestRunningBinaryUpgrade (380.63s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.2561108728 start -p running-upgrade-964792 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.2561108728 start -p running-upgrade-964792 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m17.314889087s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-964792 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-964792 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (4m59.193952783s)
helpers_test.go:176: Cleaning up "running-upgrade-964792" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-964792
--- PASS: TestRunningBinaryUpgrade (380.63s)

TestKubernetesUpgrade (158.67s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-061737 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-061737 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (40.173175038s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-061737
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-061737: (2.492181973s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-061737 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-061737 status --format={{.Host}}: exit status 7 (58.169597ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-061737 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-061737 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (59.506520351s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-061737 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-061737 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-061737 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (80.996348ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-061737] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22230
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22230-5010/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5010/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-rc.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-061737
	    minikube start -p kubernetes-upgrade-061737 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0617372 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-rc.1, by running:
	    
	    minikube start -p kubernetes-upgrade-061737 --kubernetes-version=v1.35.0-rc.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-061737 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1219 03:42:49.626396    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-061737 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (55.257480657s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-061737" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-061737
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-061737: (1.03576261s)
--- PASS: TestKubernetesUpgrade (158.67s)

TestStoppedBinaryUpgrade/Setup (3.25s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.25s)

TestStoppedBinaryUpgrade/Upgrade (114.13s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.2275385765 start -p stopped-upgrade-291901 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.2275385765 start -p stopped-upgrade-291901 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m9.958791787s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.2275385765 -p stopped-upgrade-291901 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.2275385765 -p stopped-upgrade-291901 stop: (1.931641947s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-291901 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-291901 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (42.237659466s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (114.13s)

TestNetworkPlugins/group/false (3.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-542624 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-542624 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (109.449583ms)

                                                
                                                
-- stdout --
	* [false-542624] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22230
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22230-5010/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5010/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1219 03:41:03.501278   40856 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:41:03.501392   40856 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:41:03.501401   40856 out.go:374] Setting ErrFile to fd 2...
	I1219 03:41:03.501406   40856 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:41:03.501632   40856 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5010/.minikube/bin
	I1219 03:41:03.502087   40856 out.go:368] Setting JSON to false
	I1219 03:41:03.502930   40856 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5007,"bootTime":1766110656,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 03:41:03.502976   40856 start.go:143] virtualization: kvm guest
	I1219 03:41:03.504515   40856 out.go:179] * [false-542624] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 03:41:03.505997   40856 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 03:41:03.506007   40856 notify.go:221] Checking for updates...
	I1219 03:41:03.508056   40856 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 03:41:03.509171   40856 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-5010/kubeconfig
	I1219 03:41:03.510274   40856 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5010/.minikube
	I1219 03:41:03.511694   40856 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 03:41:03.512860   40856 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 03:41:03.514671   40856 config.go:182] Loaded profile config "kubernetes-upgrade-061737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1219 03:41:03.514828   40856 config.go:182] Loaded profile config "offline-crio-052125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:41:03.514959   40856 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 03:41:03.549659   40856 out.go:179] * Using the kvm2 driver based on user configuration
	I1219 03:41:03.550614   40856 start.go:309] selected driver: kvm2
	I1219 03:41:03.550626   40856 start.go:928] validating driver "kvm2" against <nil>
	I1219 03:41:03.550637   40856 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 03:41:03.552202   40856 out.go:203] 
	W1219 03:41:03.553124   40856 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1219 03:41:03.553944   40856 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-542624 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-542624

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-542624

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-542624

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-542624

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-542624

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-542624

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-542624

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-542624

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-542624

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-542624

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-542624"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-542624"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-542624"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-542624

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-542624"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-542624"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-542624" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-542624" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-542624" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-542624" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-542624" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-542624" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-542624" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-542624" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-542624"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-542624"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-542624"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-542624"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-542624"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-542624" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-542624" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-542624" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-542624"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-542624"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-542624"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-542624"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-542624"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-542624

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-542624"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-542624"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-542624"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-542624"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-542624"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-542624"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-542624"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-542624"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-542624"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-542624"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-542624"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-542624"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-542624"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-542624"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-542624"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-542624"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-542624"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-542624"

                                                
                                                
----------------------- debugLogs end: false-542624 [took: 3.229348105s] --------------------------------
helpers_test.go:176: Cleaning up "false-542624" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-542624
--- PASS: TestNetworkPlugins/group/false (3.51s)

                                                
                                    
TestPause/serial/Start (90.64s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-813136 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-813136 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m30.634913571s)
--- PASS: TestPause/serial/Start (90.64s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.1s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-291901
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-291901: (1.09939306s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.10s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-982841 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-982841 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (82.263482ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-982841] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22230
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22230-5010/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5010/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (55.56s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-982841 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-982841 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (55.332020084s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-982841 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (55.56s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (5.78s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-982841 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-982841 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (4.638628982s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-982841 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-982841 status -o json: exit status 2 (209.805088ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-982841","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-982841
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (5.78s)

                                                
                                    
TestNoKubernetes/serial/Start (18.72s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-982841 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-982841 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (18.717629469s)
--- PASS: TestNoKubernetes/serial/Start (18.72s)

                                                
                                    
TestISOImage/Setup (29.39s)

                                                
                                                
=== RUN   TestISOImage/Setup
iso_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p guest-783207 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1219 03:44:18.512671    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-936345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
iso_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p guest-783207 --no-kubernetes --driver=kvm2  --container-runtime=crio: (29.3867111s)
--- PASS: TestISOImage/Setup (29.39s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22230-5010/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.15s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-982841 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-982841 "sudo systemctl is-active --quiet service kubelet": exit status 1 (148.137395ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.15s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.88s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.88s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-982841
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-982841: (1.218905938s)
--- PASS: TestNoKubernetes/serial/Stop (1.22s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (40.9s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-982841 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-982841 --driver=kvm2  --container-runtime=crio: (40.897368451s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (40.90s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-982841 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-982841 "sudo systemctl is-active --quiet service kubelet": exit status 1 (169.954895ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.17s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (70.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-542624 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
E1219 03:45:45.207757    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-199791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-542624 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m10.113470893s)
--- PASS: TestNetworkPlugins/group/auto/Start (70.11s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (61.7s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-542624 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-542624 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m1.699147322s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (61.70s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-542624 "pgrep -a kubelet"
I1219 03:46:53.239970    8937 config.go:182] Loaded profile config "auto-542624": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.18s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-542624 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-z2clk" [c97db9eb-cfbd-4821-ad28-65aa1dda748f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-z2clk" [c97db9eb-cfbd-4821-ad28-65aa1dda748f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004491751s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.25s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-542624 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-542624 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-542624 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-jbbrl" [3d219d4b-9ebc-493a-8442-e2ebb6edeb6b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005538767s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (75.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-542624 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-542624 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m15.416278701s)
--- PASS: TestNetworkPlugins/group/calico/Start (75.42s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-542624 "pgrep -a kubelet"
I1219 03:47:24.596006    8937 config.go:182] Loaded profile config "kindnet-542624": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-542624 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-5ct5r" [83f96f2a-8cb8-4459-9127-56d0971fbeaf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-5ct5r" [83f96f2a-8cb8-4459-9127-56d0971fbeaf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004553858s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.24s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (112.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-542624 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-542624 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m52.708596113s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (112.71s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-542624 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-542624 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-542624 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (90.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-542624 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-542624 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m30.528154396s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (90.53s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-v6s98" [a8432cf4-c44c-4722-9086-5d1a008deefd] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:353: "calico-node-v6s98" [a8432cf4-c44c-4722-9086-5d1a008deefd] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004719363s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-542624 "pgrep -a kubelet"
I1219 03:48:40.058264    8937 config.go:182] Loaded profile config "calico-542624": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.18s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-542624 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-9828t" [a740980f-fe80-4fef-8b2c-fd5c308985dc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-9828t" [a740980f-fe80-4fef-8b2c-fd5c308985dc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.003669521s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.24s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-542624 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-542624 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-542624 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (78.59s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-542624 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E1219 03:49:18.512227    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-936345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-542624 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m18.586982659s)
--- PASS: TestNetworkPlugins/group/flannel/Start (78.59s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (89.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-542624 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-542624 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m29.875206656s)
--- PASS: TestNetworkPlugins/group/bridge/Start (89.88s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-542624 "pgrep -a kubelet"
I1219 03:49:22.935740    8937 config.go:182] Loaded profile config "enable-default-cni-542624": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-542624 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-5q4vd" [2c6e915b-c4f7-4a0f-91fc-8ee95070a9ae] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-5q4vd" [2c6e915b-c4f7-4a0f-91fc-8ee95070a9ae] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.003683569s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-542624 "pgrep -a kubelet"
I1219 03:49:24.351128    8937 config.go:182] Loaded profile config "custom-flannel-542624": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-542624 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-5lcbl" [75efdb8b-1129-4ac9-91c4-8d82b557ef73] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-5lcbl" [75efdb8b-1129-4ac9-91c4-8d82b557ef73] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.00534072s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.27s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-542624 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-542624 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-542624 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-542624 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-542624 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-542624 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (96.46s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-094166 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-094166 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (1m36.459435924s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (96.46s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (107.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-298059 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-298059 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (1m47.288154098s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (107.29s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-bkn9n" [31e4e3a4-d787-4bf7-bc5e-0a46d120128c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004893657s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-542624 "pgrep -a kubelet"
I1219 03:50:31.117049    8937 config.go:182] Loaded profile config "flannel-542624": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-542624 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-9cwbx" [0255fcf4-b429-493a-abfa-4eab3fb06e2b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-9cwbx" [0255fcf4-b429-493a-abfa-4eab3fb06e2b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.005042965s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.25s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-542624 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-542624 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-542624 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-542624 "pgrep -a kubelet"
I1219 03:50:51.164915    8937 config.go:182] Loaded profile config "bridge-542624": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (12.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-542624 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-6bprn" [7a030449-338b-45ca-95d5-7d14435c87f5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-6bprn" [7a030449-338b-45ca-95d5-7d14435c87f5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.004545544s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.26s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (85.82s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-244717 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-244717 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3: (1m25.821024827s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (85.82s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-542624 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-542624 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-542624 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81.36s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-168174 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-168174 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3: (1m21.357802251s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81.36s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (12.67s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-094166 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [e1fc5999-4caf-496d-a302-707570e1f019] Pending
helpers_test.go:353: "busybox" [e1fc5999-4caf-496d-a302-707570e1f019] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [e1fc5999-4caf-496d-a302-707570e1f019] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 12.004339599s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-094166 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (12.67s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-094166 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-094166 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.030259255s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-094166 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.11s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (83.39s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-094166 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-094166 --alsologtostderr -v=3: (1m23.394000163s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (83.39s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (11.33s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-298059 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [9d5b4f64-027d-4358-ad35-d6f5cf456210] Pending
helpers_test.go:353: "busybox" [9d5b4f64-027d-4358-ad35-d6f5cf456210] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [9d5b4f64-027d-4358-ad35-d6f5cf456210] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.00468237s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-298059 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.33s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-298059 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-298059 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.035617213s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-298059 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (72.93s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-298059 --alsologtostderr -v=3
E1219 03:51:53.474291    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/auto-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:51:53.479662    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/auto-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:51:53.490031    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/auto-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:51:53.510394    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/auto-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:51:53.550738    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/auto-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:51:53.631837    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/auto-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:51:53.792317    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/auto-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:51:54.113470    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/auto-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:51:54.754284    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/auto-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:51:56.034675    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/auto-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:51:58.595332    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/auto-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:52:03.716498    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/auto-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:52:13.957134    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/auto-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:52:18.385224    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/kindnet-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:52:18.390524    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/kindnet-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:52:18.400788    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/kindnet-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:52:18.421063    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/kindnet-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:52:18.461465    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/kindnet-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:52:18.541945    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/kindnet-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:52:18.702668    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/kindnet-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:52:19.023303    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/kindnet-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:52:19.664139    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/kindnet-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:52:20.944940    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/kindnet-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:52:23.505820    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/kindnet-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-298059 --alsologtostderr -v=3: (1m12.934310607s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (72.93s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-244717 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [5641715a-fb85-45c8-b1e2-de3c394086ed] Pending
helpers_test.go:353: "busybox" [5641715a-fb85-45c8-b1e2-de3c394086ed] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [5641715a-fb85-45c8-b1e2-de3c394086ed] Running
E1219 03:52:28.626501    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/kindnet-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004033034s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-244717 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.26s)
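Note: the DeployApp step above applies testdata/busybox.yaml and then waits up to 8m0s for a pod carrying the integration-test=busybox label to reach Running (the Pending -> Running transitions logged by helpers_test.go). The helper's actual implementation is not shown in this report; the following is only a hypothetical client-go sketch of that kind of wait, where the namespace ("default"), label selector, and timeout are taken from the log and everything else is an assumption.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig (the tests select a profile
	// with --context instead; this is illustrative only).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll until a pod labelled integration-test=busybox in "default" is
	// Running, mirroring the 8m0s wait recorded in the log above.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 8*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods("default").List(ctx, metav1.ListOptions{
				LabelSelector: "integration-test=busybox",
			})
			if err != nil {
				return false, err
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("busybox is Running")
}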

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.93s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-244717 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-244717 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.93s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (78.71s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-244717 --alsologtostderr -v=3
E1219 03:52:34.437935    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/auto-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:52:38.867709    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/kindnet-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-244717 --alsologtostderr -v=3: (1m18.713287884s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (78.71s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-168174 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [ca7cb580-0d7c-401c-8d64-c5bb86760477] Pending
helpers_test.go:353: "busybox" [ca7cb580-0d7c-401c-8d64-c5bb86760477] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [ca7cb580-0d7c-401c-8d64-c5bb86760477] Running
E1219 03:52:49.625921    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.004222122s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-168174 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.92s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-168174 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-168174 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.92s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (87.53s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-168174 --alsologtostderr -v=3
E1219 03:52:59.348033    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/kindnet-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-168174 --alsologtostderr -v=3: (1m27.52950223s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (87.53s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-094166 -n old-k8s-version-094166
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-094166 -n old-k8s-version-094166: exit status 7 (54.68682ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-094166 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)
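Note: the status checks in this and the following entries use minikube's --format flag, which renders a Go text/template against the profile's status; {{.Host}}, {{.Kubelet}}, and {{.APIServer}} are the fields these tests read, and "Stopped" here is why exit status 7 is treated as "may be ok". A minimal sketch of that rendering, assuming a simplified status struct with just the fields referenced by the templates in this log (the real command exposes more):

package main

import (
	"os"
	"text/template"
)

// Status is a simplified stand-in for the data minikube exposes to
// --format templates; only the fields referenced by the tests above.
type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	// After "minikube stop", the host reports Stopped.
	st := Status{Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped"}

	// Equivalent of: minikube status --format={{.Host}}
	tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
	// Prints: Stopped
}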

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (51.93s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-094166 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-094166 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (51.596699305s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-094166 -n old-k8s-version-094166
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (51.93s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-298059 -n no-preload-298059
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-298059 -n no-preload-298059: exit status 7 (58.566614ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-298059 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.14s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (416.77s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-298059 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
E1219 03:53:15.398459    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/auto-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:53:33.872947    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/calico-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:53:33.878287    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/calico-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:53:33.888601    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/calico-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:53:33.908955    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/calico-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:53:33.949289    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/calico-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:53:34.030122    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/calico-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:53:34.190674    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/calico-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:53:34.511524    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/calico-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:53:35.152639    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/calico-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:53:36.433691    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/calico-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:53:38.994832    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/calico-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:53:40.308974    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/kindnet-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:53:44.115161    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/calico-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-298059 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (6m56.528388554s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-298059 -n no-preload-298059
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (416.77s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-244717 -n embed-certs-244717
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-244717 -n embed-certs-244717: exit status 7 (67.406148ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-244717 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.16s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (398.81s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-244717 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3
E1219 03:53:54.356182    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/calico-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-244717 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3: (6m38.586000926s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-244717 -n embed-certs-244717
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (398.81s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-168174 -n default-k8s-diff-port-168174
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-168174 -n default-k8s-diff-port-168174: exit status 7 (68.491815ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-168174 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (401.31s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-168174 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3
E1219 03:54:23.144507    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/enable-default-cni-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:54:23.149877    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/enable-default-cni-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:54:23.160244    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/enable-default-cni-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:54:23.180600    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/enable-default-cni-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:54:23.220843    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/enable-default-cni-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:54:23.301277    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/enable-default-cni-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:54:23.461968    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/enable-default-cni-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:54:23.782385    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/enable-default-cni-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:54:24.423004    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/enable-default-cni-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:54:24.605709    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/custom-flannel-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:54:24.611089    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/custom-flannel-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:54:24.621394    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/custom-flannel-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:54:24.641771    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/custom-flannel-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:54:24.682122    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/custom-flannel-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:54:24.762616    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/custom-flannel-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:54:24.922987    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/custom-flannel-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:54:25.243161    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/custom-flannel-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:54:25.704046    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/enable-default-cni-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:54:25.883397    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/custom-flannel-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:54:27.163963    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/custom-flannel-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:54:28.264423    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/enable-default-cni-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:54:29.725046    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/custom-flannel-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:54:33.384881    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/enable-default-cni-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:54:34.846054    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/custom-flannel-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:54:37.319370    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/auto-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:54:43.625360    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/enable-default-cni-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:54:45.086985    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/custom-flannel-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:54:55.797127    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/calico-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:55:02.229267    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/kindnet-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:55:04.106625    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/enable-default-cni-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:55:05.567863    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/custom-flannel-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:55:24.934353    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/flannel-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:55:24.939660    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/flannel-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:55:24.949926    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/flannel-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:55:24.970255    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/flannel-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:55:25.010685    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/flannel-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:55:25.091769    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/flannel-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:55:25.252348    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/flannel-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:55:25.573300    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/flannel-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:55:26.213634    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/flannel-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:55:27.494551    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/flannel-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:55:30.055293    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/flannel-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:55:35.176244    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/flannel-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:55:45.066881    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/enable-default-cni-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:55:45.208204    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-199791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:55:45.416471    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/flannel-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:55:46.528249    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/custom-flannel-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:55:51.406042    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/bridge-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:55:51.411425    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/bridge-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:55:51.421711    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/bridge-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:55:51.441953    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/bridge-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:55:51.482305    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/bridge-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:55:51.563093    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/bridge-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:55:51.723518    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/bridge-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:55:52.044300    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/bridge-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:55:52.683958    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:55:52.685079    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/bridge-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:55:53.966060    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/bridge-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:55:56.527211    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/bridge-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:56:01.647904    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/bridge-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:56:05.897284    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/flannel-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:56:11.888195    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/bridge-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:56:17.717996    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/calico-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:56:32.369418    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/bridge-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:56:46.858439    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/flannel-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:56:53.473822    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/auto-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:57:06.988041    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/enable-default-cni-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:57:08.261265    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-199791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:57:08.448850    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/custom-flannel-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:57:13.330384    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/bridge-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:57:18.384990    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/kindnet-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:57:21.159999    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/auto-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:57:46.069642    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/kindnet-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:57:49.626185    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:58:08.779514    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/flannel-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:58:33.873267    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/calico-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:58:35.250612    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/bridge-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:59:01.558306    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/calico-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:59:18.511936    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-936345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:59:23.143728    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/enable-default-cni-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:59:24.606318    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/custom-flannel-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:59:50.828374    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/enable-default-cni-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:59:52.289020    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/custom-flannel-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-168174 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3: (6m41.085278736s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-168174 -n default-k8s-diff-port-168174
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (401.31s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-094166 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: library/kong:3.9
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.79s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-094166 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-094166 -n old-k8s-version-094166
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-094166 -n old-k8s-version-094166: exit status 2 (212.933326ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-094166 -n old-k8s-version-094166
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-094166 -n old-k8s-version-094166: exit status 2 (216.144384ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-094166 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-094166 -n old-k8s-version-094166
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-094166 -n old-k8s-version-094166
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.79s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (38.67s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-509532 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
E1219 04:12:14.451616    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/bridge-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 04:12:18.385718    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/kindnet-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 04:12:32.684841    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-509532 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (38.672137095s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (38.67s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.98s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-509532 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.98s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (81.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-509532 --alsologtostderr -v=3
E1219 04:12:49.625757    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 04:13:33.873138    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/calico-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 04:13:48.261832    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-199791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-509532 --alsologtostderr -v=3: (1m21.264553721s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (81.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-509532 -n newest-cni-509532
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-509532 -n newest-cni-509532: exit status 7 (63.83361ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-509532 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.15s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (394.68s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-509532 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
E1219 04:14:18.512399    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-936345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 04:14:23.143940    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/enable-default-cni-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 04:14:24.606325    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/custom-flannel-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 04:15:24.934051    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/flannel-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 04:15:45.207870    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/functional-199791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 04:15:51.406298    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/bridge-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 04:16:26.995803    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/old-k8s-version-094166/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 04:16:27.001147    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/old-k8s-version-094166/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 04:16:27.011463    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/old-k8s-version-094166/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 04:16:27.031811    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/old-k8s-version-094166/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 04:16:27.072178    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/old-k8s-version-094166/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 04:16:27.152533    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/old-k8s-version-094166/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 04:16:27.313118    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/old-k8s-version-094166/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 04:16:27.633793    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/old-k8s-version-094166/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 04:16:28.274394    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/old-k8s-version-094166/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 04:16:29.554693    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/old-k8s-version-094166/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 04:16:32.115152    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/old-k8s-version-094166/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 04:16:37.236335    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/old-k8s-version-094166/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 04:16:47.476703    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/old-k8s-version-094166/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 04:16:53.474033    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/auto-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 04:17:07.957617    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/old-k8s-version-094166/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 04:17:18.385449    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/kindnet-542624/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 04:17:48.918329    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/old-k8s-version-094166/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 04:17:49.625364    8937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5010/.minikube/profiles/addons-959667/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-509532 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (6m34.474341708s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-509532 -n newest-cni-509532
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (394.68s)
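The repeated "Loading client cert failed ... no such file or directory" lines above appear to come from kubeconfig users that still reference client certificates of profiles deleted earlier in the run. A hedged diagnostic sketch in Go follows, using client-go's clientcmd to list such stale entries; the kubeconfig path is an assumption, and this is not part of the test suite.

// certaudit.go: list kubeconfig users whose client-certificate path no longer
// exists on disk, e.g. entries left behind by deleted minikube profiles.
package main

import (
	"fmt"
	"os"
	"path/filepath"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed default path; the log's run uses a jenkins-specific kubeconfig.
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.LoadFromFile(kubeconfig)
	if err != nil {
		panic(err)
	}
	for name, auth := range cfg.AuthInfos {
		if auth.ClientCertificate == "" {
			continue // inline or token credentials, nothing to stat
		}
		if _, err := os.Stat(auth.ClientCertificate); os.IsNotExist(err) {
			fmt.Printf("user %q references missing cert %s\n", name, auth.ClientCertificate)
		}
	}
}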

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-298059 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: library/kong:3.9
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.20s)
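The image check above shells out to `image list --format=json` and reports tags that do not belong to minikube's own image set. A rough Go sketch of that idea follows; the JSON field name (repoTags) and the allowlist prefix are illustrative assumptions, not the verified output schema or the test's actual rules.

// imagecheck.go: parse an image list and flag tags outside an allowlist,
// mirroring the "Found non-minikube image" lines in the log above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

type image struct {
	RepoTags []string `json:"repoTags"` // field name is an assumption
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "no-preload-298059",
		"image", "list", "--format=json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	// Illustrative allowlist: treat core registry.k8s.io images as minikube's own.
	for _, img := range images {
		for _, tag := range img.RepoTags {
			if !strings.HasPrefix(tag, "registry.k8s.io/") {
				fmt.Println("Found non-minikube image:", tag)
			}
		}
	}
}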

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (2.64s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-298059 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-298059 -n no-preload-298059
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-298059 -n no-preload-298059: exit status 2 (214.930214ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-298059 -n no-preload-298059
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-298059 -n no-preload-298059: exit status 2 (211.697258ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-298059 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-298059 -n no-preload-298059
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-298059 -n no-preload-298059
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.64s)
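The pause sequence above runs pause, probes {{.APIServer}} and {{.Kubelet}} while tolerating exit status 2, then unpauses and probes again. A compact Go sketch of that cycle, under the same tolerances the log marks as "(may be ok)", not the harness implementation:

// pausecycle.go: pause a profile, check component states with status templates,
// unpause, and check again; exit status 2 while paused is treated as expected.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func run(args ...string) (string, int) {
	cmd := exec.Command("out/minikube-linux-amd64", args...)
	out, err := cmd.Output()
	code := 0
	if ee, ok := err.(*exec.ExitError); ok {
		code = ee.ExitCode()
	} else if err != nil {
		panic(err)
	}
	return strings.TrimSpace(string(out)), code
}

func main() {
	const profile = "no-preload-298059"
	run("pause", "-p", profile, "--alsologtostderr", "-v=1")

	// While paused, the API server should report "Paused" and the kubelet
	// "Stopped"; status exits non-zero in that state, which is expected here.
	for _, field := range []string{"{{.APIServer}}", "{{.Kubelet}}"} {
		state, code := run("status", "--format="+field, "-p", profile, "-n", profile)
		fmt.Printf("%s -> %s (exit %d, may be ok)\n", field, state, code)
	}

	run("unpause", "-p", profile, "--alsologtostderr", "-v=1")

	// After unpausing, both checks are expected to succeed.
	for _, field := range []string{"{{.APIServer}}", "{{.Kubelet}}"} {
		state, code := run("status", "--format="+field, "-p", profile, "-n", profile)
		fmt.Printf("%s -> %s (exit %d)\n", field, state, code)
	}
}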

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-244717 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.19s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (2.33s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-244717 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-244717 -n embed-certs-244717
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-244717 -n embed-certs-244717: exit status 2 (198.824083ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-244717 -n embed-certs-244717
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-244717 -n embed-certs-244717: exit status 2 (202.547921ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-244717 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-244717 -n embed-certs-244717
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-244717 -n embed-certs-244717
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.33s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-168174 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.19s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.31s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-168174 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-168174 -n default-k8s-diff-port-168174
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-168174 -n default-k8s-diff-port-168174: exit status 2 (204.262056ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-168174 -n default-k8s-diff-port-168174
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-168174 -n default-k8s-diff-port-168174: exit status 2 (195.58895ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-168174 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-168174 -n default-k8s-diff-port-168174
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-168174 -n default-k8s-diff-port-168174
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.31s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-509532 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.19s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.33s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-509532 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-509532 -n newest-cni-509532
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-509532 -n newest-cni-509532: exit status 2 (195.015865ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-509532 -n newest-cni-509532
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-509532 -n newest-cni-509532: exit status 2 (199.885022ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-509532 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-509532 -n newest-cni-509532
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-509532 -n newest-cni-509532
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.33s)

                                                
                                    

Test skip (52/431)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.3/cached-images 0
15 TestDownloadOnly/v1.34.3/binaries 0
16 TestDownloadOnly/v1.34.3/kubectl 0
23 TestDownloadOnly/v1.35.0-rc.1/cached-images 0
24 TestDownloadOnly/v1.35.0-rc.1/binaries 0
25 TestDownloadOnly/v1.35.0-rc.1/kubectl 0
29 TestDownloadOnlyKic 0
38 TestAddons/serial/Volcano 0.28
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
131 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
132 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
133 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
134 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
135 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
136 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
137 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
138 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
209 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv 0
210 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv 0
242 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
243 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel 0.01
244 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService 0.01
245 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect 0.01
246 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
247 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
248 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
249 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel 0.01
258 TestGvisorAddon 0
280 TestImageBuild 0
308 TestKicCustomNetwork 0
309 TestKicExistingNetwork 0
310 TestKicCustomSubnet 0
311 TestKicStaticIP 0
343 TestChangeNoneUser 0
346 TestScheduledStopWindows 0
348 TestSkaffold 0
350 TestInsufficientStorage 0
354 TestMissingContainerUpgrade 0
357 TestNetworkPlugins/group/kubenet 3.35
367 TestNetworkPlugins/group/cilium 3.86
373 TestStartStop/group/disable-driver-mounts 0.17
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.3/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.3/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-rc.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-rc.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-rc.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.28s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-959667 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.28s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-542624 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-542624

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-542624

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-542624

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-542624

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-542624

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-542624

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-542624

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-542624

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-542624

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-542624

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-542624"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-542624"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-542624"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-542624

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-542624"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-542624"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-542624" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-542624" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-542624" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-542624" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-542624" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-542624" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-542624" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-542624" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-542624"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-542624"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-542624"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-542624"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-542624"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-542624" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-542624" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-542624" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-542624"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-542624"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-542624"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-542624"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-542624"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-542624

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-542624"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-542624"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-542624"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-542624"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-542624"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-542624"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-542624"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-542624"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-542624"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-542624"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-542624"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-542624"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-542624"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-542624"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-542624"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-542624"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-542624"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-542624"

                                                
                                                
----------------------- debugLogs end: kubenet-542624 [took: 3.17650084s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-542624" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-542624
--- SKIP: TestNetworkPlugins/group/kubenet (3.35s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-542624 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-542624

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-542624

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-542624

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-542624

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-542624

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-542624

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-542624

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-542624

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-542624

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-542624

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-542624"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-542624"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-542624"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-542624

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-542624"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-542624"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-542624" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-542624" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-542624" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-542624" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-542624" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-542624" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-542624" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-542624" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-542624"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-542624"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-542624"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-542624"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-542624"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-542624

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-542624

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-542624" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-542624" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-542624

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-542624

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-542624" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-542624" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-542624" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-542624" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-542624" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-542624"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-542624"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-542624"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-542624"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-542624"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-542624

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-542624"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-542624"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-542624"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-542624"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-542624"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-542624"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-542624"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-542624"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-542624"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-542624"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-542624"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-542624"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-542624"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-542624"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-542624"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-542624"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-542624"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-542624" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-542624"

                                                
                                                
----------------------- debugLogs end: cilium-542624 [took: 3.687487972s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-542624" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-542624
--- SKIP: TestNetworkPlugins/group/cilium (3.86s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-189846" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-189846
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    